I've configured Celery in my Django app to run a task every morning. The task simply sends an email to a group of users. The problem is that the same email is being sent a few hundred times!
This is my celery config:
BROKER_URL = 'redis://127.0.0.1:6379/0'
BROKER_TRANSPORT = 'redis'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
from celery.schedules import crontab
CELERYBEAT_SCHEDULE = {
    'alert_user_is_not_buying-everyday-at-7': {
        'task': 'opti.tasks.alert_users_not_buying',
        'schedule': crontab(hour=7, minute=0),
    },
}
and the task is:
@app.task(bind=True)
def alert_user_is_not_buying(self):
    send_mail_to_users()
And I use these commands to start the worker and beat (I use supervisor for that):
exec celery --app=opti beat --loglevel=INFO
exec celery --app=opti worker --loglevel=INFO
I believe there's no problem with my send_mail_to_users() method; it looks like the emails are sent every 30 seconds...
What is missing?
Your CELERYBEAT_SCHEDULE setting is likely going unused, as you have CELERYBEAT_SCHEDULER set to use the DatabaseScheduler. How is that scheduler configured? I would guess that's where the problem is coming from.
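If you want the schedule in settings.py to be the one that actually runs, one option (a sketch, assuming you don't also manage periodic tasks through the djcelery admin) is to drop the DatabaseScheduler line so beat falls back to its default scheduler, which does read CELERYBEAT_SCHEDULE; a 30-second cadence usually points to an interval-based PeriodicTask row in the database rather than to this file:

BROKER_URL = 'redis://127.0.0.1:6379/0'
BROKER_TRANSPORT = 'redis'

# CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
# removed so beat uses its default scheduler and the schedule below is authoritative

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    'alert_user_is_not_buying-everyday-at-7': {
        'task': 'opti.tasks.alert_users_not_buying',
        'schedule': crontab(hour=7, minute=0),
    },
}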
I have a django app + redis on one server and celery on another server. I want to call celery task from django app.
My task.py on Celery Server:
from celery import Celery

app = Celery('tasks')
app.conf.broker_url = 'redis://localhost:6379/0'

@app.task(bind=True)
def test():
    print('Testing')
Calling the Celery task from the Django server:
from celery import Celery
celery = Celery()
celery.conf.broker_url = 'redis://localhost:6379/0'
celery.send_task('tasks.test')
I am running the celery worker using this command:
celery -A tasks worker --loglevel=INFO
When I call the Celery task from Django, it pings the Celery server but I get the following error:
Received unregistered task of type 'tasks.test'. The message has been
ignored and discarded.
Did you remember to import the module containing this task? Or maybe
you're using relative imports?
How do I fix this, or is there any other way to call the task?
Your task should be a shared task within Celery, as follows:
tasks.py
from celery import Celery, shared_task

app = Celery('tasks')
app.conf.broker_url = 'redis://localhost:6379/0'

@shared_task(name="test")
def test():
    print('Testing')
and start celery as normal:
celery -A tasks worker --loglevel=INFO
Your application can then call your test task:
main.py
from celery import Celery
celery = Celery()
celery.conf.broker_url = 'redis://localhost:6379/0'
celery.send_task('test')
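One detail that trips people up: the string passed to send_task has to match the name the worker registered, so with @shared_task(name="test") it is 'test', whereas a task left with its default name would be sent as 'tasks.test'. A small sketch of a call that also passes arguments (the args shown are illustrative):

from celery import Celery

celery = Celery()
celery.conf.broker_url = 'redis://localhost:6379/0'

# The first argument is the task's registered name; args/kwargs are illustrative.
result = celery.send_task('test', args=[], kwargs={})
print(result.id)  # the task id; reading the return value would need a result backend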
I'm trying to use Celery and Celery beat to run a scheduled task that processes data from the database, but when I try to run the task I get the error "django.db.utils.OperationalError: FATAL: role "tanaka" does not exist". The code for the scheduled task is shown below.
settings.py
CELERY_BEAT_SCHEDULE = {
    'task-number-one': {
        'task': 'loans.tasks.update_loan_book',
        'schedule': 60,
    },
}
tasks.py
@shared_task
def update_loan_book():
    tenants = Tenant.objects.all()
    for tenant in tenants:
        pass  # logic to update the tenant object
The code works when I run the task using the "celery -A proj worker -l info -B" command but does not work when I daemonize celery and celery beat. Config files for celery and celery beat are shown below. I am using supervisord.
[program:projworker]
command=/home/tanaka/microfinance/bin/celery -A cloud_based_microfinance worker -l info
directory=/home/tanaka/Repositories/microfinance_project
user=tanaka
numprocs=1
stdout_logfile=/var/log/celery/proj_worker.log
stderr_logfile=/var/log/celery/proj_worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
[program:projbeat]
command=/home/tanaka/microfinance/bin/celery -A cloud_based_microfinance beat -l info
directory=/home/tanaka/Repositories/microfinance_project
user=tanaka
numprocs=1
stdout_logfile=/var/log/celery/proj_beat.log
stderr_logfile=/var/log/celery/proj_beat.log
autostart=true
autorestart=true
startsecs=10
priority=999
When I try to run the task as a daemon I get "django.db.utils.OperationalError: FATAL: role "tanaka" does not exist" in the proj_worker.log file.
Solved the problem: if your task is going to access the database, you first need a DB connection.
from celery import shared_task
from django.db import connection

@shared_task
def update_loan_book():
    with connection.cursor() as cursor:
        tenants = Tenant.objects.all()
        for tenant in tenants:
            pass  # logic to update the tenant object
Celery and Django are two different processes, therefore you have to import the DB connection from Django for Celery to access the DB. If you are not using the Django ORM, this is a helpful resource.
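A related pattern, since the worker manages its own long-lived database connections, is to refresh stale connections before the task touches the ORM. The sketch below assumes the Tenant model lives in loans/models.py:

from celery import shared_task
from django.db import close_old_connections

from loans.models import Tenant  # assumed path to the Tenant model

@shared_task
def update_loan_book():
    close_old_connections()  # drop connections that have timed out or gone away
    for tenant in Tenant.objects.all():
        pass  # logic to update the tenant object goes here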
I have Django+Celery in Heroku, and Celery is set up as:
import djcelery
djcelery.setup_loader()
BROKER_URL = "django://" # tell kombu to use the Django database as the message queue
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERY_ALWAYS_EAGER = False
CELERY_TIMEZONE = 'Europe/Madrid'
I have 2 tasks defined in tasks.py, one periodic and another that is executed on asynchronous calls:
@task
def test_one_shot():
    print "One shot"

@periodic_task(run_every=crontab(minute="*/5"))
def test_periodic():
    print "Periodic"
Heroku is configured with a main web worker and an auxiliary worker:
web: gunicorn config.wsgi:application ON
worker: python manage.py celery worker -B -l info ON
With this setup, I run the test_one_shot task as follows:
test_one_shot.apply_async(eta=datetime.now()+timedelta(minutes=2))
And although it appears as registered in the heroku logs:
Received task: test.tasks.test_one_shot[f29c609d-b6e8-45d4-808d-2ca690f029af] eta:[2016-08-07 00:09:30.262362+02:00]
It never executes. On the other hand, the periodic task test_periodic is executed as expected. What am I doing wrong?
Thanks!
EDIT: The task execution was not appearing in the logs due to a datetime timezone-awareness issue. However, when the task is called programmatically, it is never executed.
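For the timezone part, a sketch of a timezone-aware eta (assuming Django's timezone utilities are available, given CELERY_TIMEZONE is set to 'Europe/Madrid' above):

from datetime import timedelta
from django.utils import timezone

# An aware datetime avoids the eta being interpreted in the wrong timezone.
test_one_shot.apply_async(eta=timezone.now() + timedelta(minutes=2))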
I ended up changing the Celery broker to RabbitMQ on Heroku following this guide, and the problem got solved.
Basically, I installed RabbitMQ on Heroku:
$ heroku addons:add cloudamqp
And set the new configuration for it:
import djcelery
djcelery.setup_loader()
CELERY_TIMEZONE = 'Europe/Madrid'
BROKER_URL = env("CLOUDAMQP_URL", default="django://")
BROKER_POOL_LIMIT = 1
BROKER_CONNECTION_MAX_RETRIES = None
CELERY_TASK_SERIALIZER = "json"
CELERY_ACCEPT_CONTENT = ["json", "msgpack"]
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_ALWAYS_EAGER = False
if BROKER_URL == "django://":
    INSTALLED_APPS += ("kombu.transport.django",)
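Note that env() above presumably comes from a helper such as django-environ; if you are not using one, the same fallback can be written with the standard library (a sketch, not part of the original config):

import os

# Read the CloudAMQP URL from the environment, falling back to the Django
# database transport when the variable is not set.
BROKER_URL = os.environ.get("CLOUDAMQP_URL", "django://")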
Here is my celery file:
from __future__ import absolute_import
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ets.settings')

from django.conf import settings  # noqa

app = Celery('proj',
             broker='redis://myredishost:6379/0',
             backend='redis://myredishost:6379/0',
             include=['tracking.tasks'])

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
)

if __name__ == '__main__':
    app.start()
Here is my task file:
@app.task
def escalate_to_sup(id, group):
    escalation_email, created = EscalationEmail.objects.get_or_create()
    escalation_email.send()
    return 'sup email sent to: ' + str(group)

@app.task
def escalate_to_fm(id, group):
    escalation_email, created = EscalationEmail.objects.get_or_create()
    escalation_email.send()
    return 'fm email sent to: ' + str(group)
I start the worker like this:
celery -A ets worker -l info
I have also tried to add concurrency like this:
celery -A ets worker -l info --concurrency=10
I attempt to call the tasks above with the following:
from tracking.tasks import escalate_to_fm, escalate_to_sup

def status_change(equipment):
    r1 = escalate_to_sup.apply_async((equipment.id, [1, 2]), countdown=10)
    r2 = escalate_to_fm.apply_async((equipment.id, [3, 4]), countdown=20)
    print r1.id
    print r2.id
This prints:
c2098768-61fb-41a7-80a2-f79a73570966
23959fa3-7f80-4e20-a42f-eef75e9bedeb
The escalate_to_sup and escalate_to_fm functions show up in the worker log intermittently. At least one executes, but never both.
I have tried spinning up more workers, and then both tasks execute. I do this like:
celery -A ets worker -l info --concurrency=10 -n worker1.%h
celery -A ets worker -l info --concurrency=10 -n worker2.%h
The problem is I don't know how many tasks might execute concurrently, so spinning up a worker for every possible task is not feasible.
Does Celery expect a worker for every active task?
How do I execute multiple tasks with a single worker?
I'm trying to use MongoDB as the message queue for Celery (in a Django app). The current development version of Celery (2.2.0rc2) is supposed to let you do this, but I can't seem to get any workers to pick up tasks I'm creating.
Versions:
celery v2.2.0rc3
mongodb 1.6.5
pymongo 1.9
django-celery 2.2.0rc2
In my settings, I have:
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    # Shouldn't need these - defaults are correct.
    "host": "localhost",
    "port": 27017,
    "database": "celery",
    "taskmeta_collection": "messages",
}
BROKER_BACKEND = 'mongodb'
BROKER_HOST = "localhost"
BROKER_PORT = 27017
BROKER_USER = ""
BROKER_PASSWORD = ""
BROKER_VHOST = ""
import djcelery
djcelery.setup_loader()
I've created a test tasks.py file as follows:
from celery.decorators import task
@task()
def add(x, y):
    return x + y
If I fire up celeryd in the background, it appears to start normally. I then open a python shell and run the following:
>>> from myapp.tasks import add
>>> result = add.delay(5,5)
>>> result
<AsyncResult: 7174368d-288b-4abe-a6d7-aeba987fa886>
>>> result.ready()
False
The problem is that no workers ever pick up the tasks. Am I missing a setting or something? How do I point Celery at the message queue?
We had this same issue. While the docs say all tasks should be registered in Celery by calling
import djcelery
djcelery.setup_loader()
it wasn't working properly. So, we still used the
CELERY_IMPORTS = ('YOUR_APP.tasks',)
setting in settings.py. Also, make sure you restart Celery if you add a new task because Celery has to register the tasks when it first starts.
Django, Celerybeat and Celery with MongoDB as the Broker
Remember that Kombu works only with MongoDB 1.3+ because it needs the findAndModify functionality.
If you are on Ubuntu, the latest version in the repository is 1.2, so it doesn't work.
You may also have to set
BROKER_VHOST = "dbname"
Keep me posted if it works
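For context, here is a sketch of the broker settings from the question with BROKER_VHOST pointing at an explicit MongoDB database ('celery' is just an illustrative name):

BROKER_BACKEND = "mongodb"
BROKER_HOST = "localhost"
BROKER_PORT = 27017
BROKER_VHOST = "celery"  # the MongoDB database Kombu should write messages to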
Be sure to add this to your settings, or the workers can't find the task and will fail silently.
CELERY_IMPORTS = ("namespace", )
I had the same issue, but when I upgraded to Celery 2.3.3 everything worked like a charm.