I am trying to use django-celery in my project.
In settings.py I have:
CELERY_RESULT_BACKEND = "amqp"
The server started fine with:
python manage.py celeryd --settings=settings
But when I try to access the result of a delayed task, I get the following error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\celery\result.py", line 108, in ready
    return self.status in self.backend.READY_STATES
  File "C:\Python27\lib\site-packages\celery\result.py", line 196, in status
    return self.state
  File "C:\Python27\lib\site-packages\celery\result.py", line 191, in state
    return self.backend.get_status(self.task_id)
  File "C:\Python27\lib\site-packages\celery\backends\base.py", line 404, in _is_disabled
    raise NotImplementedError("No result backend configured. "
NotImplementedError: No result backend configured. Please see the documentation for more information.
It is very strange because when I just run celeryd (with the same celery settings), it works just fine. Has anyone encountered this problem before?
Thanks in advance!
I had the same problem getting the result back from a Celery task, even though the task itself was executed (per the console logs). What I found was that I had the same setting, CELERY_RESULT_BACKEND = "redis", in Django's settings.py, but I had also instantiated Celery in tasks.py:
celery = Celery('tasks', broker='redis://localhost')
I suppose this overrides the settings.py properties, so the backend server used to store results was never configured for my Celery instance.
I removed this line, let django-celery pick up its properties from settings.py, and the sample code worked for me.
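For reference, a minimal sketch of the working layout, assuming Redis on localhost for both the broker and the result backend (both URLs are illustrative placeholders):
# settings.py -- a sketch; both URLs are placeholders
BROKER_URL = 'redis://localhost:6379/0'             # message broker
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'  # where task results are stored

# tasks.py -- no explicit Celery(...) instantiation here; let django-celery
# build the app from settings.py so the result backend actually gets configured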
If you're just running the samples from http://www.celeryproject.org/tutorials/first-steps-with-celery/, you need to run the console via manage.py:
% python manage.py shell
For those who are in a desperate search for a solution, like I was: put this line at the end of settings.py:
djcelery.setup_loader()
It looks like django-celery will not pick up its own settings unless they are loaded in a strict order.
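In context, a minimal sketch of how the end of settings.py might look (the broker URL and backend value are placeholders):
# settings.py (tail) -- a sketch; the URL and backend are placeholders
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'amqp'

import djcelery
djcelery.setup_loader()  # keep this after the CELERY_* settings are defined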
In my case, the problem was that I was passing CELERY_RESULT_BACKEND as a keyword argument to the Celery constructor:
Celery('proj',
       broker='amqp://guest:guest@localhost:5672//',
       CELERY_RESULT_BACKEND='amqp://',
       include=['proj.tasks'])
The solution was to use the backend argument instead:
Celery('proj',
       broker='amqp://guest:guest@localhost:5672//',
       backend='amqp://',
       include=['proj.tasks'])
Somehow the console has to have the Django environment set up in order to pick up the settings. For example, in PyCharm you can run the Django console, in which everything works as expected.
See the AMQP backend settings documentation for a better understanding:
Note: The AMQP backend requires RabbitMQ 1.1.0 or higher to automatically expire results. If you are running an older version of RabbitMQ you should disable result expiration like this:
CELERY_TASK_RESULT_EXPIRES = None
Try adding the line below to your settings.py:
CELERY_TASK_RESULT_EXPIRES = 18000 # 5 hours
BACKGROUND: I have a Django application and I want to write tests for it. I have written a docker-compose.yml file so that it can be deployed on a server quickly and easily. The application works with the help of Celery, with Redis as its broker. The entire Selenium testing process is also done via Docker, where I deploy a Selenium hub and Chrome and Firefox nodes.
PROBLEM: When I run python manage.py test app_name.tests.test_forms, my fixtures are loaded and the Selenium connection is successful; the test class inherits from StaticLiveServerTestCase. But when I call a Celery task via app_name.tasks.task_name.delay() (after importing it, obviously), the task runs on the Celery worker container that serves the production app. Instead of making its changes in the test database, the task makes them in the production database.
I have tried multiple possible solutions, like adding this decorator above the test class:
@override_settings(CELERY_TASK_EAGER_PROPAGATES=True, CELERY_TASK_ALWAYS_EAGER=True, BROKER_BACKEND='memory')
I also tried starting a celery worker in the test's thread (assembled into a full class in the sketch at the end of this question), using the following in setUpClass:
cls.celery_worker = start_worker(app, perform_ping_check=False)
cls.celery_worker.__enter__()
and in tearDownClass:
cls.celery_worker.__exit__(None, None, None)
and I get the following error:
[ERROR/MainProcess] pidbox command error: AttributeError("'NoneType' object has no attribute 'groups'")
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/kombu/pidbox.py", line 104, in dispatch
    reply = handle(method, arguments)
  File "/usr/local/lib/python3.7/site-packages/kombu/pidbox.py", line 126, in handle_cast
    return self.handle(method, arguments)
  File "/usr/local/lib/python3.7/site-packages/kombu/pidbox.py", line 120, in handle
    return self.handlers[method](self.state, **arguments)
  File "/usr/local/lib/python3.7/site-packages/celery/worker/control.py", line 279, in enable_events
    if dispatcher.groups and 'task' not in dispatcher.groups:
AttributeError: 'NoneType' object has no attribute 'groups'
But even though this error is encountered, the changes still happen in the production database.
I am missing something. Can anyone help me make the Celery tasks apply their changes to the test database?
Am I following a bad practice? I just want Selenium to fill in the forms required for testing, the Celery tasks to do their job and update the database after some processing, and then to assert the results in the test class.
Thank you, any help will be highly appreciated.
NOTE: The Celery tasks are nested, i.e. they call each other via .delay() or .apply_async() calls.
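For reference, here is how the worker-in-test fragments above fit together as one class. This is only a sketch of the attempted approach (which, per the question, still errored in this environment), and the import path project.celery for the Celery app is an assumption:
# a sketch assembling the question's fragments, not a verified fix
from celery.contrib.testing.worker import start_worker
from django.contrib.staticfiles.testing import StaticLiveServerTestCase

from project.celery import app  # assumed import path for the Celery app

class FormTests(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        # start an in-process worker so tasks execute inside the test process
        # (against the test database) instead of on an external worker container
        cls.celery_worker = start_worker(app, perform_ping_check=False)
        cls.celery_worker.__enter__()

    @classmethod
    def tearDownClass(cls):
        cls.celery_worker.__exit__(None, None, None)
        super().tearDownClass()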
I am trying to extend my Django app with Celery crontab functionality. For this purpose I created a celery.py file, where I put the code shown in the official documentation.
Here is the code from project/project/celery.py:
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
Then, inside my project/settings.py file, I specify the Celery-related configs as follows:
from celery.schedules import crontab

CELERY_TIMEZONE = "Europe/Moscow"
CELERYBEAT_SHEDULE = {
    'test_beat_tasks': {
        'task': 'webhooks.tasks.adding',
        'schedule': crontab(minute='*/1'),
    },
}
Then I ran the worker and celery beat in the same terminal with:
celery -A project worker -B
But nothing happened; I didn't see my celery beat task printing any output, while I expected my task webhooks.tasks.adding to execute.
Then I decided to check that the Celery configs were applied. For this purpose, in a python manage.py shell, I examined the celery.app.conf object:
# import app from the project.celery module
from project import celery
# then examine the app's configs
celery.app.conf
And inside the huge output of Celery configs, I saw that the timezone is set to None.
As I understand it, the app initiated in project/celery.py is ignoring my project/settings.py CELERY_TIMEZONE and CELERY_BEAT_SCHEDULE configs, but why? What am I doing wrong? Please guide me.
After spending so much time researching this problem, I found that my mistake was in how I ran the worker and celery beat. Run the way I did above, the worker doesn't print task execution to the terminal. To see whether the task is executing, run it as follows: celery -A project worker -B -l INFO (or use DEBUG instead of INFO if you want more detailed output). Hope it will help anyone.
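As a side note, app.config_from_object('django.conf:settings', namespace='CELERY') only reads settings prefixed with CELERY_, so the beat schedule would typically be declared like this (a sketch; the task path is the question's own placeholder):
# settings.py -- with namespace='CELERY', the setting must be named
# CELERY_BEAT_SCHEDULE for the app to pick it up
from celery.schedules import crontab

CELERY_TIMEZONE = "Europe/Moscow"
CELERY_BEAT_SCHEDULE = {
    'test_beat_tasks': {
        'task': 'webhooks.tasks.adding',    # task path from the question
        'schedule': crontab(minute='*/1'),  # every minute
    },
}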
I have a Python script with an indefinite loop that keeps checking for data. Now I am confused about how to execute it so that it keeps running while the server is running.
I think the script should start just after I run the Django server, but how do I run this script?
Any suggestions?
django-admin runserver is only for development use. Assuming this is a development environment, it's probably simplest to just run your script in a separate console, or to make a simple shell script that starts the Django server, backgrounds that process, then runs your script:
#!/bin/sh
django-admin runserver &
/path/to/my/script.py &
You'll need to kill those processes manually before re-running that script.
In a production environment, use WSGI to run Django and something like supervisord to run the other script. You could configure the OS init system (probably systemd) to ensure both of these tasks start on reboot and remain running.
The first big part is to pack your indefinite loop into a function and use huey/celery/etc. to make it a 'task' that runs asynchronously. Then call this task from your views.py/models.py/manage.py, as in the sketch below.
HOW TO RUN THE TASK ASYNCHRONOUSLY:
HUEY: https://huey.readthedocs.io/en/latest/
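For illustration, a minimal sketch of wrapping the loop as a Celery task (the task body and sleep interval are placeholders; huey's @huey.task() decorator works analogously):
# tasks.py -- a sketch; checkdata's body is a placeholder
import time
from celery import shared_task

@shared_task
def checkdata():
    # the indefinite loop from the question, packed into a task
    while True:
        # ... check for new data here ...
        time.sleep(5)  # avoid a busy loop; the interval is arbitrary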
HOW TO RUN THE TASK WHEN YOU START THE SERVER (2 WAYS):
Edit your manage.py, adding your function call right after its imports:
#!/usr/bin/env python
import os
import sys

print('Interesting things happens.')  # THIS IS WHERE YOU RUN YOUR tasks.checkdata

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'OTAKU.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)
However, maybe you want to leave manage.py alone. Create a new app in your project, with a views.py and models.py. (Remember to add this app to INSTALLED_APPS in settings.py.)
In views.py :
print("The view is loading.")# THIS IS WHERE YOU CAN RUN YOUR tasks.checkdata.
In models.py:
print("The models is loading.")# THIS IS WHERE YOU CAN RUN YOUR tasks.checkdata.
And when you run your server now, this is what you might see:
Interesting things happens.
The models is loading.
Interesting things happens.
The models is loading.
Performing system checks...
The view is loading.
System check identified no issues (0 silenced).
July 15, 2019 - 11:52:03
Django version 2.1.3, using settings 'OTAKU.settings'
Starting development server at http://0.0.0.0:9988/
Quit the server with CONTROL-C.
Replace the print calls in the script with your data-checking functions and it will work.
I am just starting to learn about Django and have just discovered celery to run async background tasks.
I have a dummy project which I pilfered off the internet with a sample task as follows:
from djcelery import celery
from time import sleep

@celery.task
def sleeptask(i):
    sleep(i)
    return i
Now in my view, I have the following:
def test_celery(request):
    result = tasks.sleeptask.delay(10)
    return HttpResponse(result.task_id)
This runs fine and when I point the browser to it, I get some random string like 93463e9e-d8f5-46b2-8544-8d4b70108b0d which I am guessing is the task id.
However, when I do this:
def test_celery(request):
    result = tasks.sleeptask.delay(10)
    return HttpResponse(result.get())
The web browser goes into a loop with the message "Connecting..." and never returns. I was under the impression that this would block until the task ran and then give the result, but that does not seem to be the case. What am I doing wrong?
Another question: the way I am doing it, is it going to run asynchronously, i.e. not block while the task is running?
EDIT
In my settings.py file I have:
import djcelery
# Setup celery
djcelery.setup_loader()
BROKER_URL = 'redis://localhost:6379/0'
On the Django side, I do not get any errors:
System check identified no issues (0 silenced).
September 27, 2016 - 18:13:12
Django version 1.9.5, using settings 'myproject.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Thanks to the hints in the comments, I was finally able to solve the problem. I had to add the following to my settings.py file:
CELERY_IMPORTS = ('myproject.tasks',)
I also needed to run the worker as:
python manage.py celery worker
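Putting the pieces together, a minimal sketch of the relevant settings.py section with these fixes applied (the broker URL is the question's own; the result backend line is an extra assumption, since result.get() needs a result backend to fetch from):
# settings.py -- a sketch combining the question's setup and the fix
import djcelery
djcelery.setup_loader()

BROKER_URL = 'redis://localhost:6379/0'              # broker from the question
CELERY_IMPORTS = ('myproject.tasks',)                # make the worker register the tasks
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'   # assumed: needed for result.get()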
I'm working on a Django site, using the Django 1.4 official release. My site has a few apps. One of the apps has a model named Campaign with FKs to models in the other apps. As suggested in the Django reference (https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey), I chose to define the FK fields using a string instead of the related model classes themselves since I expect to have circular references in the next version or so, and this approach avoids circular import issues.
When I deployed the site on AWS (Amazon Web Services) using the BitNami djangostack 1.4 (Apache, mod_wsgi, MySQL), my deployed site worked correctly for the most part. On pages displaying forms for the Campaign model, Django raised an exception when trying to create a form field relying on a foreign key field of the Campaign model, complaining the related model was not loaded. The funny/scary thing is that when I set settings.DEBUG to True (definitely not something we'll want once the site goes live), the issue no longer occurs!
The site worked perfectly when I tested it on my local Django development server, and it also worked perfectly using the same BitNami djangostack deployed on my Windows workstation.
Here's the relevant Apache error output I see on the AWS console.
[error] ERROR :: Internal Server Error: /campaigns/
[error] Traceback (most recent call last):
[error] File "/opt/bitnami/apps/django/lib/python2.6/site-packages/django/core/handlers/base.py", line 101, in get_response
[error] request.path_info)
... (django/wsgi blah blah)
[error] File "/opt/bitnami/apps/django/django_projects/Project/campaign/views.py", line 5, in <module>
[error] from forms import CampaignForm
[error] File "/opt/bitnami/apps/django/django_projects/Project/campaign/forms.py", line 12, in <module>
[error] class CampaignForm(forms.ModelForm):
[error] File "/opt/bitnami/apps/django/lib/python2.6/site-packages/django/forms/models.py", line 206, in __new__
[error] opts.exclude, opts.widgets, formfield_callback)
[error] File "/opt/bitnami/apps/django/lib/python2.6/site-packages/django/forms/models.py", line 160, in fields_for_model
[error] formfield = f.formfield(**kwargs)
[error] File "/opt/bitnami/apps/django/lib/python2.6/site-packages/django/db/models/fields/related.py", line 1002, in formfield
[error] (self.name, self.rel.to))
[error] ValueError: Cannot create form field for 'reward' yet, because its related model 'reward.Reward' has not been loaded yet
So, here's a quick recap:
My site works on my local Django development server, regardless of settings.DEBUG value
With the BitNami stack on my local Windows machine, it works, regardless of settings.DEBUG value
With the BitNami stack on AWS (Ubuntu), it works with DEBUG = True but not with DEBUG = False
I understand what the error means, but I don't understand why it's occurring, i.e. why the dependent model is not loaded. Has anyone ever encountered a similar issue, or has advice that could help me fix it?
Note: I tried to Google the error message but all I found was the Django source code where that error was raised. I also tried searching for more general queries like mod_wsgi django related model, but I couldn't find anything that seemed relevant to my problem.
Alright, I found a solution to my problem on a blog post by Graham Dumpleton (http://blog.dscpl.com.au/2010/03/improved-wsgi-script-for-use-with.html).
In short, the Django development server validates the models (which resolves string-based relations) when starting, and that operation probably wasn't done when using mod_wsgi under the BitNami djangostack on Ubuntu with DEBUG = False. Very specific conditions, I know - but G. Dumpleton's 'fixed' mod_wsgi code solved the issue for me.
This is what my wsgi.py looks like now:
# wsgi.py
import os
import sys

# make sure the folder containing the apps and the site is at the front of sys.path
# (maybe we could just define WSGIPythonPath in the django.conf file instead...?)
project_root = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# set the DJANGO_SETTINGS_MODULE environment variable
# (doesn't work without this, despite what G. Dumpleton's blog post said)
site_name = os.path.basename(os.path.dirname(__file__))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "%s.settings" % site_name)

# new code - fixes inter-app model dependencies
from my_site import settings
import django.core.management
django.core.management.setup_environ(settings)  # mimic manage.py
utility = django.core.management.ManagementUtility()
command = utility.fetch_command('runserver')
command.validate()  # validate the models - *THIS* is what was missing

# set up the WSGI application object
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I had just this problem and was looking at modifying wsgi.py as suggested. Then a colleague suggested simply reordering the apps in INSTALLED_APPS and, hey presto, it's working without having to touch the wsgi.py.
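For illustration, a sketch of what such a reordering might look like, using hypothetical app names modeled on the question (the point being that the app defining the related model loads before the app that references it by string):
# settings.py -- a sketch with hypothetical app names
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.staticfiles',
    'reward',    # defines reward.Reward; listed before apps whose FKs point at it
    'campaign',  # references 'reward.Reward' by string
)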