_store_result() got an unexpected keyword argument 'request' in celery - django

I have no idea why celery suddenly stopped working. It might be a missing setting, but I don't think so, and the code of the views that run the celery tasks is unchanged. Sentry only shows the following:
It works locally (in the development environment) but not in production. I'm using django 1.4.2, djcelery 3.0.11 and celery 3.1.9. What do you think is happening?

I've run into the same problem, but as part of an upgrade (among others):
Django >=1.5,<1.6 -> >=1.6,<1.7
celery <3.1 -> >=3.1.17
django-celery <3.1 -> <=3.1.16
I can see you opened an issue on Celery's GitHub project. However, the request keyword argument was added to celery a long time ago, and it seems to still be present in the master branch.
This is the logic in a base back-end class, but if the implementation you're using doesn't have this keyword argument, it'll crash. In your case, it looks like the versions of celery and django-celery are incompatible.
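To illustrate the kind of mismatch with a minimal, simplified sketch (these are not the actual celery signatures, just the shape of the problem):

class BaseBackend:
    # celery >= 3.1 forwards the task request when storing results
    def store_result(self, task_id, result, status, traceback=None, request=None):
        return self._store_result(task_id, result, status, traceback,
                                  request=request)

class OldDjceleryBackend(BaseBackend):
    # a djcelery 3.0.x-era backend predates the 'request' argument
    def _store_result(self, task_id, result, status, traceback=None):
        return result

OldDjceleryBackend().store_result('some-task-id', 42, 'SUCCESS')
# TypeError: _store_result() got an unexpected keyword argument 'request'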
The commit that brought 3.1 support is only available in the following versions of django-celery: v3.1.16, v3.1.15, v3.1.10, v3.1.9, v3.1.1 and v3.1.0. I suggest upgrading to one of these.
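For example, something along these lines should give you a compatible pair:
pip install "celery>=3.1.17" "django-celery==3.1.16"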

Related

Can you daemonize Celery through your django site?

Reading Celery's daemonization documentation, if you're running Celery on Linux with systemd, you set it up with two files:
/etc/systemd/system/celery.service
/etc/conf.d/celery
I'm using Celery in a Django site with django-celery-beat, and the documentation is a little confusing on this point:
Example Django configuration
Django users now uses [sic] the exact same template as above, but make sure that the module that defines your Celery app instance also sets a default value for DJANGO_SETTINGS_MODULE as shown in the example Django project in First steps with Django.
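For reference, the celery app module from First steps with Django sets that default roughly like this (proj is the docs' placeholder project name):

import os
from celery import Celery

# Must be set before the app is created, so tasks can find the settings.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')
# Read CELERY_-prefixed options from the Django settings module.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()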
The docs don't just come out and say: put your daemonization settings in settings.py and it will all work out. Judging by another SO post, that user seems to have run into the same confusion, where the Django instructions imply you should use the init.d method.
Bonus points if you can say whether it's possible to run Celery and RabbitMQ both configured and started with the Django instance (if that makes sense).
I'm thinking not, at least for Celery, if only because the daemon variables start with CELERYD_, while First steps with Django says: "...all Celery configuration options must be specified in uppercase instead of lowercase, and start with CELERY_".
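To illustrate what I mean, the docs' example environment file (/etc/conf.d/celery) is full of CELERYD_ variables; a trimmed sketch with illustrative values:

CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="proj"
CELERYD_NODES="w1"
CELERYD_OPTS="--time-limit=300 --concurrency=8"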

Error: cannot import name RemovedInDjango110Warning in django 1.9.5

This code has been running for about a week now, but today it started throwing this error. And it is not happening at the URL level, which is what many places seem to suggest.
I am using celery, djcelery and Django 1.9.5. In my celery task, in one part where I am trying to connect to my DB, it is throwing this error. The strange part is when I run the code line by line in shell, it works.
All this code runs inside a virtualenv used by two projects which have exactly the same requirements. To confirm, I just checked the django version in pip: it is 1.9.5.
Please let me know if any extra info is required.

Celerybeat not recognizing new model even though django app does

When starting celerybeat I get the following error:
Restarting celery periodic task scheduler
Stopping celerybeat... NOT RUNNING
Starting celerybeat...
Error: One or more models did not validate:
collections.collection: 'language' has a relation with model <class 'languages.models.Language'>, which has either not been installed or is abstract.
collections.translation: 'language' has a relation with model <class 'languages.models.Language'>, which has either not been installed or is abstract.
But the languages app has definitely been added to my django settings; uwsgi and celery start up fine, and everything except celerybeat works as it should.
It's as if celerybeat is working off an old settings file, but that shouldn't be possible, or is it? I have also recently moved my settings file.
Found the problem. I had earlier moved my settings files, but had not updated the celery config files to match. The solution was to find the files:
celeryd
celerybeat
in /etc/default/
and change the path so it points at the new location of the settings files:
sudo nano /etc/default/celeryd
and edit the relevant path variables.
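The lines to look for are along these lines (the paths and project name here are hypothetical):

# /etc/default/celeryd
CELERYD_CHDIR="/srv/myproject"
export DJANGO_SETTINGS_MODULE="myproject.settings"  # update this after moving settings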

Lots of socket errors with celery eventlet tasks

I'm getting a lot of "IOError: Socket closed" exceptions from amqplib.client_0_8.method_framing.read_method when running my celery workers with the --pool=eventlet option. I'm also seeing a lot of timeout exceptions from eventlet.hubs.hub.switch.
I'm using an async_manage.py script similar to the one at https://gist.github.com/821848, running the workers like:
./async_manage.py celeryd_detach -E --pool=eventlet --concurrency=120 --logfile=<path>
Is this a known issue, or is there something wrong with my configuration or setup?
I'm running djcelery 2.2.4, Django 1.3, and eventlet 0.9.15.
The problem was a side effect of some code that was blocking. I managed to detect the blocking code using the eventlet option described in this article.
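That blocking detection is also available programmatically; a minimal sketch, in case it helps others:

import eventlet.debug

# Make eventlet flag greenthreads that block the hub instead of yielding
eventlet.debug.hub_blocking_detection(True)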
There were 2 places where blocking was occurring: DNS lookups and MySQL database access. I managed to resolve the first by installing the dnspython package, and the second by using the undocumented MySQLdb option in eventlet:
import eventlet

eventlet.monkey_patch()  # green the standard library (sockets, threads, time, etc.)
eventlet.monkey_patch(MySQLdb=True)  # also patch MySQLdb so database calls don't block

Does django's runserver option provide a hook for running other restart scripts?

I've recently been playing around with django and celery. One annoying thing during development is the fact that I have to restart the celery daemon each time I modify a task. When I'm developing, I usually like to use 'manage.py runserver' which automatically reloads the django framework on modifications to my apps.
Is there a way to add a hook to the reloading process that runserver does so that it automatically restarts the celery daemon I have running?
Alternatively, does celery have a similar monitor-and-reload-on-change mode that I should be using for development?
Django-supervisor works very well for this purpose. You can have it start the Django server, Celery, and anything else you need, and have different configurations for development and production servers. It also knows to reload the celery daemon when your code changes.
https://github.com/rfk/django-supervisor
I believe you can set CELERY_ALWAYS_EAGER to true.
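A minimal development-settings sketch (the second setting is optional; it makes eagerly executed tasks re-raise their exceptions):

# settings.py -- development only: tasks run synchronously in-process, no worker needed
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True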
Yes. Django provides an autoreload hook, which can be used to restart other scripts.
Here is a simple management command which prints a message on reload
from django.core.management.base import BaseCommand
from django.utils import autoreload


def reload():
    print('Code changed. Auto reloading...')


class Command(BaseCommand):
    def handle(self, *args, **options):
        autoreload.main(reload)
Now you can save this as reload.py under an app's management/commands/ directory and run it with python manage.py reload. A management command to reload celery workers is available here.
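The linked command follows the same pattern; here is a sketch of the idea (the project name proj and the pkill pattern are placeholders, adjust to your setup):

import shlex
import subprocess

from django.core.management.base import BaseCommand
from django.utils import autoreload


def restart_celery():
    # Stop any running worker, then start a fresh one with the new code
    subprocess.call(shlex.split('pkill -f "celery worker"'))
    subprocess.call(shlex.split('celery worker -A proj --loglevel=info'))


class Command(BaseCommand):
    def handle(self, *args, **options):
        print('Starting celery worker with autoreload...')
        autoreload.main(restart_celery)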
Celery doesn't have any feature for reloading code or auto-restarting when the code changes, so you have to restart it manually.
There isn't a way to add a hook, and I don't think it's worthwhile to edit Django's source code just to perform a restart.
Personally, while I'm developing, I prefer to watch celery's shell output, which is decorated with colour, instead of tailing the logs; it's more readable.
Celery 2.5 has an experimental runtime option, --autoreload, that could be used for this purpose too. There's more detail in the release notes. That being said, I think django-supervisor (via @Lee Semel's answer) looks like the better way of doing things. I thought I would post this alternative here in case other readers do not want to have to configure another app for asynchronous processing.
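If memory serves, with djcelery's management command that looked something like:
python manage.py celeryd --autoreload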