I've been trying to schedule a task for Celery to run in the background, but I keep encountering issues.
A quick reference on what I've done so far:
I installed celery and django-celery via easy_install and added the following to INSTALLED_APPS:
'djcelery',
'kombu.transport.django',
then added the following to settings.py and ran syncdb:
import djcelery
djcelery.setup_loader()
BROKER_URL = "django://"
The task I'm trying to run in the background is the password reset. When a user forgets his password, I want the email-sending task to run in the background, so what I did was
move my forgot_password function from views.py into tasks.py so it can run as a task.
My tasks.py:
from django.contrib.auth.views import password_reset
from django.shortcuts import render
from celery.decorators import task

@task()
def forgot_password(request):
    if request.method == 'POST':
        return password_reset(request,
            from_email=request.POST.get('email'))
    else:
        return render(request, 'forgot_password.html')
There's nothing in views.py now.
The problem is that even though I can send the email if I lose my password, I'm not sure it's running in the background. What I did to check was run:
manage.py celery worker --loglevel=info
but I get an error: KeyError: 'processName':
C:\o\17\mysite>manage.py celery worker --loglevel=info
-------------- celery@gg-PC v3.0.19 (Chiastic Slide)
---- **** -----
--- * *** * -- Windows-Vista-6.0.6001-SP1
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> broker: django://localhost//
- ** ---------- .> app: default:0x319c930 (djcelery.loaders.DjangoLoader)
- ** ---------- .> concurrency: 2 (processes)
- *** --- * --- .> events: OFF (enable -E to monitor this worker)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery: exchange:celery(direct) binding:celery
[Tasks]
. accounts.tasks.forgot_password
[2013-05-15 17:49:45,279: WARNING/MainProcess] C:\Python26\lib\site-packages\django_celery-3.0.17-py2.6.egg\djcelery\loaders.py:133: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2013-05-15 17:49:45,292: WARNING/MainProcess] celery@gg-PC ready.
[2013-05-15 17:49:45,292: INFO/MainProcess] consumer: Connected to django://localhost//.
Traceback (most recent call last):
File "C:\Python26\lib\logging\__init__.py", line 754, in emit
msg = self.format(record)
File "C:\Python26\lib\logging\__init__.py", line 637, in format
return fmt.format(record)
File "C:\Python26\lib\logging\__init__.py", line 428, in format
s = self._fmt % record.__dict__
KeyError: 'processName'
Traceback (most recent call last):
File "C:\Python26\lib\logging\__init__.py", line 754, in emit
msg = self.format(record)
File "C:\Python26\lib\logging\__init__.py", line 637, in format
return fmt.format(record)
File "C:\Python26\lib\logging\__init__.py", line 428, in format
s = self._fmt % record.__dict__
KeyError: 'processName'
Can someone please tell me whether forgot_password is configured properly to send emails in the background, and why I get this KeyError: 'processName' error?
First of all, there is a simple way of sending emails using Celery:
django-celery-email
Second, your task is wrong. A task is a background job, not a view. It should perform only the final operation, which in your case is sending the email.
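For illustration, a minimal sketch of that split (the task name, templates, and the build_reset_link helper are assumptions, not your code): the view stays in views.py and only queues the work, while the task does nothing but send the email.

# tasks.py (sketch)
from celery.decorators import task
from django.core.mail import send_mail

@task()
def send_password_reset_email(email, reset_link):
    # The task performs only the slow, final operation: sending the email.
    send_mail(
        'Password reset',
        'Use this link to reset your password: %s' % reset_link,
        'noreply@example.com',  # assumed sender address
        [email],
        fail_silently=False,
    )

# views.py (sketch) -- the view handles the request/response and just queues the task
from django.shortcuts import render

def forgot_password(request):
    if request.method == 'POST':
        email = request.POST.get('email')
        reset_link = build_reset_link(email)  # hypothetical helper you would implement
        send_password_reset_email.delay(email, reset_link)
        return render(request, 'password_reset_sent.html')
    return render(request, 'forgot_password.html')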
Related
I am using Django signals to trigger a task (sending mass emails to subscribers using Celery) when a blog post is created in the Django admin. The signal is triggered, but the task function in the tasks file is never called. I know this because a print statement I put inside the task function never prints.
My signals.py file:
from apps.blogs.celery_files.tasks import send_mails
from apps.blogs.models import BlogPost, Subscribers
from django.db.models.signals import post_save
from django.dispatch import receiver

def email_task(sender, instance, created, **kwargs):
    if created:
        print("#signals.py")
        send_mails.delay(5)

post_save.connect(email_task, sender=BlogPost, dispatch_uid="email_task")
My tasks.py file:
from __future__ import absolute_import, unicode_literals
from celery import shared_task
# from celery.decorators import task
from apps.blogs.models import BlogPost, Subscribers
from django.core.mail import send_mail
from travel_crm.settings import EMAIL_HOST_USER
from time import sleep

@shared_task
def send_mails(duration, *args, **kwargs):
    print("#send_mails.py")
    subscribers = Subscribers.objects.all()
    blog = BlogPost.objects.latest('date_created')
    for abc in subscribers:
        sleep(duration)
        print("i am inside loop")
        emailad = abc.email
        send_mail('New Blog Post ', f" Checkout our new blog with title {blog.title} ",
                  EMAIL_HOST_USER, [emailad],
                  fail_silently=False)
Here, print("#send_mails.py") is never executed, but print("#signals.py") in the signals.py file is. Hence, the signal is received after the BlogPost model object is created, but the send_mails function inside tasks.py is not executed.
I have installed both Celery and the Redis server, and both are working fine.
The main thing is: if I remove .delay(5) from the signals file and just call send_mails() inside email_task, it works perfectly and I get the emails. But as soon as I use delay(), the function inside the tasks file is not called. What is the issue?
My output when I run the worker:
-------------- celery@DESKTOP-AQPSFR9 v5.1.2 (sun-harmonics)
--- ***** -----
-- ******* ---- Windows-10-10.0.18362-SP0 2021-07-18 11:06:10
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: travel_crm:0x15c262afcd0
- ** ---------- .> transport: redis://localhost:6379//
- ** ---------- .> results: redis://localhost:6379/
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. apps.blogs.celery_files.celery.debug_task
. apps.blogs.celery_files.tasks.send_mails
. travel_crm.celery.debug_task
[2021-07-18 11:06:11,465: INFO/SpawnPoolWorker-1] child process 9276 calling self.run()
[2021-07-18 11:06:11,475: INFO/SpawnPoolWorker-2] child process 8792 calling self.run()
[2021-07-18 11:06:11,496: INFO/SpawnPoolWorker-4] child process 1108 calling self.run()
[2021-07-18 11:06:11,506: INFO/SpawnPoolWorker-3] child process 7804 calling self.run()
[2021-07-18 11:06:13,145: INFO/MainProcess] Connected to redis://localhost:6379//
[2021-07-18 11:06:17,206: INFO/MainProcess] mingle: searching for neighbors
[2021-07-18 11:06:24,287: INFO/MainProcess] mingle: all alone
[2021-07-18 11:06:32,396: WARNING/MainProcess] c:\users\user\desktop\travelcrm\myvenv\lib\site-packages\celery\fixups\django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
warnings.warn('''Using settings.DEBUG leads to a memory
[2021-07-18 11:06:32,396: INFO/MainProcess] celery@DESKTOP-AQPSFR9 ready.
[2021-07-18 11:06:32,596: INFO/MainProcess] Task apps.blogs.celery_files.tasks.send_mails[6bbac0ae-8146-4fb0-b64b-a07755123e1d] received
[2021-07-18 11:06:32,612: INFO/MainProcess] Task apps.blogs.celery_files.tasks.send_mails[25d3b32a-f223-4ae4-812b-fa1cfaedaddd] received
[2021-07-18 11:06:34,633: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
File "c:\users\user\desktop\travelcrm\myvenv\lib\site-packages\billiard\pool.py", line 362, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "c:\users\user\desktop\travelcrm\myvenv\lib\site-packages\celery\app\trace.py", line 635, in fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
[2021-07-18 11:06:34,633: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)')
Traceback (most recent call last):
File "c:\users\user\desktop\travelcrm\myvenv\lib\site-packages\billiard\pool.py", line 362, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
File "c:\users\user\desktop\travelcrm\myvenv\lib\site-packages\celery\app\trace.py", line 635, in fast_trace_task
tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
The stack trace helps identify that the issue has to do with how the Celery task is being called.
ValueError: not enough values to unpack (expected 3, got 0)
Which is this part of the code:
send_mails.delay(5)
Try calling the function using apply_async instead.
send_mails.apply_async(args=(5, ))
If that doesn't work, remove *args and **kwargs so the signature is just def send_mails(duration):. I don't see why those parameters are necessary.
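If it helps, here is a minimal sketch of the simplified task and the explicit call (the task body is trimmed; only the signature and the call matter):

# tasks.py -- simplified signature without *args/**kwargs
@shared_task
def send_mails(duration):
    # ... existing email-sending loop stays here ...
    ...

# signals.py -- queue the task explicitly with apply_async
send_mails.apply_async(args=(5,))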
More information can be found in this answer: https://stackoverflow.com/a/48910727/7838574
Or in the Docs here: https://docs.celeryproject.org/en/latest/userguide/calling.html#basics
Total Celery and Django noob here, so sorry if the problem is trivial. Basically, any function decorated with @app.task is not being processed by Celery; it just runs normally, as if Celery isn't there.
My celery_app.py file is -
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery(broker=settings.CELERY_BROKER_URL)
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()

if __name__ == '__main__':
    app.start()
While my tasks.py file is -
from project.celery_app import app

@app.task
def mytask():
    ...
I get the following output on running celery in the terminal -
-------------- celery@LAPTOP v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Windows-10-10.0.16299-SP0 2017-12-20 19:27:24
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x229ce2884e0
- ** ---------- .> transport: amqp://user:**@localhost:5672/myvhost
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 8 (solo)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. account.tasks.mytask
[2017-12-20 19:27:24,085: INFO/MainProcess] Connected to amqp://user:**@127.0.0.1:5672/myvhost
[2017-12-20 19:27:24,101: INFO/MainProcess] mingle: searching for neighbors
[2017-12-20 19:27:25,126: INFO/MainProcess] mingle: all alone
[2017-12-20 19:27:25,141: WARNING/MainProcess] c:\programdata\anaconda2\envs\myenv\lib\site-packages\celery\fixups\django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2017-12-20 19:27:25,141: INFO/MainProcess] celery@LAPTOP- ready.
So my task is known to Celery, but Celery does nothing about it. The task runs on a button click, and with --loglevel=debug I can see that Celery isn't affected by it. I am using RabbitMQ as the broker, Celery 4.1.0, Python 3, and Django 1.10.5. Any help would be greatly appreciated!
As I had thought, a simple mistake: I just needed to change mytask() to mytask.delay() and Celery started receiving it.
.delay() is actually a shortcut method. If you want to provide additional options, you have to use .apply_async().
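For example (the countdown value below is just illustrative):

# Both calls queue the task on the broker instead of running it inline.
mytask.delay()                    # shortcut, no extra options
mytask.apply_async(countdown=60)  # same call, but executed 60 seconds later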
The official docs can be found here:
http://docs.celeryproject.org/en/latest/userguide/calling.html
I'm setting up celery based on an example and at this point...
$ export PYTHONPATH=/webapps/hello_django/hello:$PYTHONPATH
$ /webapps/hello_django/bin/celery --app=hello.celery:app worker --loglevel=INFO
which on my end I set as:
samuel@samuel-pc:~/Documents/code/revamp$ export PYTHONPATH=/home/samuel/Documents/code/revamp/gallery:$PYTHONPATH
samuel@samuel-pc:~/Documents/code/revamp$ /home/samuel/Documents/code/revamp/revamp/celery --app=revamp.celery:app worker --loglevel=INFO
bash: /home/samuel/Documents/code/revamp/revamp/celery: No such file or directory
Not sure what it did to the path. This is what the result should be:
-------------- celery@django v3.1.11 (Cipater)
---- **** -----
--- * *** * -- Linux-3.2.0-4-amd64-x86_64-with-debian-7.5
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: hello_django:0x15ae410
- ** ---------- .> transport: redis://localhost:6379/0
- ** ---------- .> results: disabled
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. testapp.tasks.test
[2014-05-20 13:53:59,740: INFO/MainProcess] Connected to redis://localhost:6379/0
[2014-05-20 13:53:59,748: INFO/MainProcess] mingle: searching for neighbors
[2014-05-20 13:54:00,756: INFO/MainProcess] mingle: all alone
[2014-05-20 13:54:00,769: WARNING/MainProcess] celery@django ready.
My guess is that I need to set the path to the Celery installation; if so, can anyone tell me what that path is?
I had lots of headaches with the Celery tutorials out there. Try this:
First, you need a virtual environment for your project, so you don't need to set up paths.
bash:
sudo pip3 install virtualenv
virtualenv env
source env/bin/activate
Then you need a django project and an app.
bash:
pip install django
django-admin startproject myproject
cd myproject
python manage.py startapp myapp
Then you should pip install celery
Next, create a tasks.py at the same level as your views.py in the myapp directory:
tasks.py
from celery import Celery
from celery.decorators import task

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@task(bind=True, name="my_task")
def my_task(self):
    print('hello')
    return 1 + 1
Install your broker (rabbitmq)
bash:
sudo apt-get install rabbitmq-server
sudo service rabbitmq-server restart
Go to your app directory, the one that has tasks.py, and run celery -A tasks worker --loglevel=info. This only works from the directory where tasks.py is defined. Then you should have your worker up and running. When you print or return something from your task, it should appear here.
Finally, use your task. Set up a view (set up a URL, make a template, etc.) and call your task from the view:
views.py
from django.shortcuts import render
from .tasks import my_task

def index(request):
    my_task.delay()
    return render(request, 'index.html')
The magic is that the delay call is asynchronous and non-blocking.
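To wire the view up, a minimal URL configuration might look like this (the module path and route are assumptions, and path() assumes Django 2.0+):

# myproject/urls.py (sketch)
from django.contrib import admin
from django.urls import path
from myapp.views import index

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', index, name='index'),
]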
From this minimalistic example I hope you can better understand the paths the tutorials talk about and the more complicated things they make you do, like placing Celery settings in settings.py, calling the worker from other directories, and putting things on the path... This was a real pain when I was trying to learn this from the official docs.
Good luck!
I have a Django project where a TwythonStreamer connection is started on a Celery task worker. The connection is started and reloaded as the search terms change. However, in its current state, and prior to updating the project to Celery 3.1.1, it would SIGSEGV when this particular task attempts to run. I can execute the same commands as the task in the Django shell and they work just fine:
tu = TwitterUserAccount.objects.first()
stream = NetworkStreamer(settings.TWITTER_CONSUMER_KEY, settings.TWITTER_CONSUMER_SECRET, tu.twitter_access_token, tu.twitter_access_token_secret)
stream.statuses.filter(track='foo,bar')
however, with RabbitMQ/Celery running (while in the project's virtualenv) in another window:
celery worker --app=project.app -B -E -l INFO
and try to run:
@task()
def test_network():
    tu = TwitterUserAccount.objects.first()
    stream = NetworkStreamer(settings.TWITTER_CONSUMER_KEY, settings.TWITTER_CONSUMER_SECRET, tu.twitter_access_token, tu.twitter_access_token_secret)
in the Django shell via:
test_network.apply_async()
the following SIGSEGV error occurs in the Celery window (upon initialization of the NetworkStreamer):
Task project.app.tasks.test_network with id 5a9d1689-797a-4d35-8bf3-9795e51bb0ec raised exception:
"WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).',)"
Task was called with args: [] kwargs: {}.
The contents of the full traceback was:
Traceback (most recent call last):
File "/Users/foo_user/.virtualenvs/project/lib/python2.7/site-packages/billiard/pool.py", line 1170, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 11 (SIGSEGV).
NetworkStreamer is simply an inherited TwythonStreamer (as shown online here).
I have other Celery tasks that run just fine, in addition to various Celery Beat tasks. djcelery.setup_loader(), etc. is being done. I've tried adjusting various settings (I thought it might have been a pickle issue), but I'm not even passing any parameters. This project structure is how Celery is being set up, named, etc…
BROKER_URL = 'amqp://'
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
CELERY_RESULT_ENGINE_OPTIONS = {"echo": True}
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_BACKEND = 'amqp'
# Short lived sessions, disabled by default
CELERY_RESULT_PERSISTENT = True
CELERY_RESULT_BACKEND = 'amqp'
CELERY_TASK_RESULT_EXPIRES = 18000 # 5 hours.
CELERY_SEND_TASK_ERROR_EMAILS = True
Versions:
Python: 2.7.5
RabbitMQ: 3.3.4
Django==1.6.5
amqp==1.4.5
billiard==3.3.0.18
celery==3.1.12
django-celery==3.1.10
flower==0.7.0
psycopg2==2.5.3
pytz==2014.4
twython==3.1.2
I am using Django with Celery to run two tasks in the background related to contacts/email parsing.
Structure is:
project
    /api
    /core
        tasks.py
    settings.py
settings.py file contains:
BROKER_URL = 'django://'
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
#celery
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
sys.path.append(os.path.dirname(os.path.basename(__file__)))
CELERY_IMPORTS = ['project.core.tasks']
import djcelery
djcelery.setup_loader()
# ....
INSTALLED_APPS = (
    #...
    'kombu.transport.django',
    'djcelery',
)
tasks.py contains:
from celery.task import Task
from celery.registry import tasks

class ParseEmails(Task):
    #...

class ImportGMailContactsFromGoogleAccount(Task):
    #...

tasks.register(ParseEmails)
tasks.register(ImportGMailContactsFromGoogleAccount)
Also, I added this in wsgi.py:
os.environ["CELERY_LOADER"] = "django"
Now, I have this app hosted on a WebFaction server. On my localhost this runs fine, but on the WebFaction server, where the Django app is deployed on an Apache server, I get:
[2013-01-23 17:25:00,067: ERROR/MainProcess] Task project.core.tasks.ImportGMailContactsFromGoogleAccount[df84e03f-9d22-44ed-a305-24c20407f87c] raised exception: Task of kind 'project.core.tasks.ImportGMailContactsFromGoogleAccount' is not registered, please make sure it's imported.
But the tasks show up as registered. If I run
python2.7 manage.py celeryd -l info
I obtain:
-------------- celery@web303.webfaction.com v3.0.13 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: django://localhost//
- ** ---------- . app: default:0x1e55350 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 8 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[Tasks]
. project.core.tasks.ImportGMailContactsFromGoogleAccount
. project.core.tasks.ParseEmails
I thought it could be a relative import error, but I assumed the changes in settings.py and wsgi.py would prevent that.
I am thinking the multiple Python versions supported by WebFaction could have something to do with this; however, I installed all the libraries for Python 2.7 and I am also running Django under Python 2.7, so there should be no problem with that.
Running on localhost using celeryd -l info, the tasks also show up in the list when I start the worker, and it doesn't output the error when I call the task: it runs perfectly.
Thank you
I had the same issue in a new Ubuntu 12.04 / Apache / mod_wsgi / Django 1.5 / Celery 3.0.13 production environment. Everything works fine on my Mac OS X 10.8 laptop and my old server (which has Celery 3.0.12), but not on the new server.
It seems there is some issue in Celery:
https://github.com/celery/celery/issues/1150
My initial solution was changing my Task-class-based task to a @task-decorator-based one, from something like this:
class CreateInstancesTask(Task):
    def run(self, pk):
        management.call_command('create_instances', verbosity=0, pk=pk)

tasks.register(CreateInstancesTask)
to something like this:
@task()
def create_instances_task(pk):
    management.call_command('create_instances', verbosity=0, pk=pk)
Now this task seems to work, but of course I have to do some further testing...
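For reference, callers would then queue the decorator-based task the usual way, for example (the pk value is just illustrative):

# queue the task asynchronously
create_instances_task.delay(pk=42)
# equivalent call, with room for extra options
create_instances_task.apply_async(kwargs={'pk': 42})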