I want to run a Django-Celery task with manual transaction management, but it seems that the decorators do not stack.
e.g.
def ping():
    print 'ping'
    pong.delay('arg')

@task(ignore_result=True)
@transaction.commit_manually()
def pong(arg):
    print 'pong: %s' % arg
    transaction.rollback()
results in
TypeError: pong() got an unexpected keyword argument 'task_name'
while the reverse decorator order results in
---> 22 pong.delay('arg')
AttributeError: 'function' object has no attribute 'delay'
It makes sense, but I'm having trouble finding a nice workaround. The Django docs don't mention alternatives to the decorator, and I don't want to write a class for every Celery task when I don't need one.
Any ideas?
Previously, Celery had some magic where a set of default keyword arguments were passed to the task if it accepted them.
Since version 2.2 you can disable this behaviour; the easiest way is to import the task decorator from celery.task instead of celery.decorators:
from celery.task import task

@task
@transaction.commit_manually
def t():
    pass
The decorators module is deprecated and will be removed completely in 3.0, as will the "magic keyword arguments".
Note:
For custom Task classes you should set the accept_magic_kwargs attribute to False:
class MyTask(Task):
    accept_magic_kwargs = False
Note 2: Make sure your custom decorators preserve the name of the function using functools.wraps, otherwise the task will end up with the wrong name.
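For illustration, a minimal sketch of a name-preserving decorator (the wrapper body is just a placeholder):

import functools

def my_decorator(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        # ... custom logic would go here ...
        return func(*args, **kwargs)
    return wrapper

@task
@my_decorator
def add(x, y):
    return x + y

# add.__name__ is still 'add', so the task is registered under the right name.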
The task decorator generates a class x(Task) from your function, with the run method as your target. I suggest you define the class yourself and decorate the method.
Untested e.g.:
class pong(Task):
    ignore_result = True

    @transaction.commit_manually()
    def run(self, arg, **kwargs):
        print 'pong: %s' % arg
        transaction.rollback()
I want to build a decorator for my test functions which has several uses. One of them is helping to add properties to the generated junitxml.
I know there's a built-in pytest fixture for this called record_property that does exactly that. How can I use this fixture inside my decorator?
def my_decorator(arg1):
    def test_decorator(func):
        def func_wrapper():
            # hopefully somehow use record_property with arg1 here
            # do some other logic here
            return func()
        return func_wrapper
    return test_decorator

@my_decorator('some_argument')
def test_this():
    pass  # do actual assertions etc.
I know I can pass the fixture directly into every test function and use it in the tests, but I have a lot of tests and it seems extremely redundant to do this.
Also, I know I can use conftest.py and create a custom marker and call it in the decorator, but I have a lot of conftest.py files and I don't manage all of them alone, so I can't enforce it.
Lastly, trying to import the fixture directly into my decorator module and then using it results in an error, so that's a no-go as well.
Thanks for the help.
It's a bit late, but I came across the same problem in our code base. I found a solution, but it is rather hacky, so I wouldn't guarantee that it works with older versions or will keep working in the future.
Hence I asked if there is a better solution. You can check it out here: How to use pytest fixtures in a decorator without having it as argument on the decorated function
The idea is basically to register the decorated test functions and then trick pytest into thinking they require the fixture in their argument list:
import functools
from inspect import signature
from typing import Any

import pytest


class RegisterTestData:
    # global testdata registry
    testdata_identifier_map = {}  # Dict[str, List[str]]

    def __init__(self, testdata_identifier, direct_import=True):
        self.testdata_identifier = testdata_identifier
        self.direct_import = direct_import
        self._always_pass_my_import_fixture = False

    def __call__(self, func):
        if func.__name__ in RegisterTestData.testdata_identifier_map:
            RegisterTestData.testdata_identifier_map[func.__name__].append(self.testdata_identifier)
        else:
            RegisterTestData.testdata_identifier_map[func.__name__] = [self.testdata_identifier]

        # We need to know if we decorate the original function, or if it was already
        # decorated with another RegisterTestData decorator. This is necessary to
        # determine if the direct_import fixture needs to be passed down or not
        if getattr(func, "_decorated_with_register_testdata", False):
            self._always_pass_my_import_fixture = True
        setattr(func, "_decorated_with_register_testdata", True)

        @functools.wraps(func)
        @pytest.mark.usefixtures("my_import_fixture")  # register the fixture to the test in case it doesn't have it as argument
        def wrapper(*args: Any, my_import_fixture, **kwargs: Any):
            # Because of the signature of the wrapper, my_import_fixture is not part
            # of the kwargs which is passed to the decorated function. In case the
            # decorated function has my_import_fixture in the signature we need to pack
            # it back into the **kwargs. This is always and especially true for the
            # wrapper itself even if the decorated function does not have
            # my_import_fixture in its signature
            if self._always_pass_my_import_fixture or any(
                "my_import_fixture" in p.name for p in signature(func).parameters.values()
            ):
                kwargs["my_import_fixture"] = my_import_fixture
            if self.direct_import:
                my_import_fixture.import_all()
            return func(*args, **kwargs)

        return wrapper
from typing import List

from pytest import Config, Item  # pytest >= 7; older versions import these from _pytest


def pytest_collection_modifyitems(config: Config, items: List[Item]) -> None:
    for item in items:
        if item.name in RegisterTestData.testdata_identifier_map and "my_import_fixture" not in item._fixtureinfo.argnames:
            # Hack to trick pytest into thinking my_import_fixture is part of the
            # argument list of the original function. Only works because of
            # @pytest.mark.usefixtures("my_import_fixture") in the decorator
            item._fixtureinfo.argnames = item._fixtureinfo.argnames + ("my_import_fixture",)
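For completeness, a hedged sketch of how decorated tests might look (the testdata identifiers below are placeholders, not from the original):

@RegisterTestData("customers.json")  # hypothetical testdata identifier
def test_customer_import():
    # my_import_fixture is injected behind the scenes; the test never has to
    # declare it unless it wants to use it directly.
    assert True

@RegisterTestData("orders.json", direct_import=False)
def test_order_import(my_import_fixture):
    # Declaring the fixture still works: the wrapper packs it back into kwargs.
    my_import_fixture.import_all()
    assert True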
I have a task foobar:
@app.task(bind=True)
def foobar(self, owner, a, b):
    if already_working(owner):  # check if a foobar task is already running for owner.
        register_myself(self.request.id, owner)  # add myself in the DB.
    return a + b
How can I mock the self.request.id attribute? I am already patching everything and calling the task directly rather than using .delay/.apply_async, but the value of self.request.id seems to be None (as I am doing real interactions with the DB, this makes the test fail, etc…).
For reference, I'm using Django as a framework, but I think this problem is just the same no matter the environment you're using.
Disclaimer: Well, I do not think this is documented anywhere, and this answer might be implementation-dependent.
Celery wraps its tasks into celery.Task instances; I do not know if it swaps the celery.Task.run method with the user's task function or not.
But when you call a task directly, you call __call__, and it pushes a context which contains the task ID, etc…
So the idea is to bypass __call__ and Celery's usual workings:
first, we push a controlled task ID: foobar.push_request(id=1), for example;
then, we call the run method: foobar.run(*args, **kwargs).
Example:
from unittest.mock import patch

@app.task(bind=True)
def foobar(self, name):
    print(name)
    return foobar.utils.polling(self.request.id)

@patch('foobar.utils.polling')
def test_foobar(mock_polling):
    foobar.push_request(id=1)
    mock_polling.return_value = "done"
    assert foobar.run("test") == "done"
    mock_polling.assert_called_once_with(1)
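One caveat, hedged: the pushed request stays on the task's context stack, so it can be safer to pop it again after the test (pop_request is the counterpart of push_request on Celery's Task class):

@patch('foobar.utils.polling')
def test_foobar_with_cleanup(mock_polling):
    foobar.push_request(id=1)
    try:
        mock_polling.return_value = "done"
        assert foobar.run("test") == "done"
    finally:
        foobar.pop_request()  # don't leak the fake request into other tests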
You can call the task synchronously using
task = foobar.s(<args>).apply()
This will assign a unique task ID, so the value will not be None and your code will run. Then you can check the results as part of your test.
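As a hedged sketch of such a test (assuming already_working and register_myself live in a tasks module and are patched out so no real DB is touched):

from unittest.mock import patch

@patch('tasks.register_myself')                     # hypothetical module path
@patch('tasks.already_working', return_value=True)  # hypothetical module path
def test_foobar_eager(mock_working, mock_register):
    result = foobar.s('owner', 1, 2).apply()  # runs eagerly, with a real task ID
    assert result.get() == 3
    mock_register.assert_called_once()  # self.request.id was a real ID, not None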
There is probably a way to do this with patch, but I could not work out a way to assign a property. The most straightforward way is to just mock self.
tasks.py:
@app.task(name='my_task')
def my_task(self, *args, **kwargs):
    pass  # do some thing
test_tasks.py:
from mock import Mock

def test_my_task():
    self = Mock()
    self.request.id = 'ci_test'
    my_task(self)
I am currently developing a Django application based on django-tenants-schema. You don't need to look into the actual code of the module, but the idea is that it has a global setting for the current database connection defining which schema to use for the application tenant, e.g.
tenant = tenants_schema.get_tenant()
and for setting it:
tenants_schema.set_tenant(xxx)
For some of the tasks I would like them to remember the current global tenant selected during the instantiation, e.g. in theory:
class AbstractTask(Task):

    def before_submit(self):
        '''Run this method before returning the task future.'''
        self.run_args['tenant'] = tenants_schema.get_tenant()

    def before_run(self):
        '''This method is run before the related .run() task method.'''
        tenants_schema.set_tenant(self.run_args['tenant'])
Is there an elegant way of doing it in celery?
Celery (as of 3.1) has signals you can hook into to do this. You can alter the kwargs that were passed in, and on the other side, undo your alterations before they're given to the actual task:
from celery import shared_task
from celery.signals import before_task_publish, task_prerun, task_postrun
from threading import local

current_tenant = local()

@before_task_publish.connect
def add_tenant_to_task(body=None, **unused):
    body['kwargs']['tenant_middleware.tenant'] = getattr(current_tenant, 'id', None)
    print 'sending tenant: {t}'.format(t=current_tenant.id)

@task_prerun.connect
def extract_tenant_from_task(kwargs=None, **unused):
    tenant_id = kwargs.pop('tenant_middleware.tenant', None)
    current_tenant.id = tenant_id
    print 'current_tenant.id set to {t}'.format(t=tenant_id)

@task_postrun.connect
def cleanup_tenant(**kwargs):
    current_tenant.id = None
    print 'cleaned current_tenant.id'

@shared_task
def get_current_tenant():
    # Here is where you would do work that relied on current_tenant.id being set.
    import time
    time.sleep(1)
    return current_tenant.id
And if you run the task (not showing logging from the worker):
In [1]: current_tenant.id = 1234; ct = get_current_tenant.delay(); current_tenant.id = 5678; ct.get()
sending tenant: 1234
Out[1]: 1234
In [2]: current_tenant.id
Out[2]: 5678
The signals are not called if no message is sent (when you call the task function directly, without delay() or apply_async()). If you want to filter on the task name, it is available as body['task'] in the before_task_publish signal handler, and the task object itself is available in the task_prerun and task_postrun handlers.
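For instance, a hedged sketch of filtering by task name in the publish handler (the name check below is an assumption, not from the original):

@before_task_publish.connect
def add_tenant_to_task(body=None, **unused):
    # body['task'] holds the task's registered name; skip tasks we don't handle.
    if body.get('task') != 'myapp.tasks.get_current_tenant':  # hypothetical name
        return
    body['kwargs']['tenant_middleware.tenant'] = getattr(current_tenant, 'id', None)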
I am a Celery newbie, so I can't really tell if this is the "blessed" way of doing "middleware"-type stuff in Celery, but I think it will work for me.
I'm not sure what you mean here: is before_submit executed before the task is called by a client?
In that case I would rather use a with statement:
from contextlib import contextmanager

@contextmanager
def set_tenant_db(tenant):
    prev_tenant = tenants_schema.get_tenant()
    try:
        tenants_schema.set_tenant(tenant)
        yield
    finally:
        tenants_schema.set_tenant(prev_tenant)

@app.task
def tenant_task(tenant=None):
    with set_tenant_db(tenant):
        do_actions_here()

tenant_task.delay(tenant=tenants_schema.get_tenant())
You can of course create a base task that does this automatically; you can apply the context in Task.__call__, for example, but I'm not sure that saves you much over just using the with statement explicitly.
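A minimal sketch of that base-task idea, assuming the tenant kwarg convention from above (this class is not from the original answer):

from celery import Task

class TenantTask(Task):
    '''Hypothetical base class that wraps every run in set_tenant_db.'''

    def __call__(self, *args, **kwargs):
        tenant = kwargs.pop('tenant', None)
        with set_tenant_db(tenant):
            return super(TenantTask, self).__call__(*args, **kwargs)

@app.task(base=TenantTask)
def tenant_task_auto():
    do_actions_here()

tenant_task_auto.delay(tenant=tenants_schema.get_tenant())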
I am using Python 2.7 and Django 1.27, and I am using Celery for tasks.
I have this view
def my_view(request):
    do_stuff()
    local_1 = 1
    local_2 = 4
    celery_delayed_task(locals())
    return HttpResponse('OK')
This resulted in this exception:

Passing locals() fails: a class that defines __slots__ without defining __getstate__ cannot be pickled

So I thought maybe I needed to create a copy of the locals() dictionary, since the task will be called when the view no longer exists.
I tried this instead:
import copy

def my_view(request):
    do_stuff()
    local_1 = 1
    local_2 = 4
    locals_dict = copy.deepcopy(locals())
    celery_delayed_task(locals_dict)
    return HttpResponse('OK')
and now I got this error:

Deepcopy of object fails: object.__new__(cStringIO.StringO) is not safe, use cStringIO.StringO.__new__()

Obviously I am doing it wrong. Any thoughts?
The task arguments must be serialized.
Celery uses the Python pickle protocol by default, but also supports json, yaml, msgpack, or custom serializers.
The objects you are trying to send cannot be pickled. There's a chance you could make them pickleable, but in the end, passing locals() as task arguments is not good practice.
See: http://docs.python.org/library/pickle.html
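A hedged sketch of the usual alternative: pass only the explicit, serializable values the task needs (names below are illustrative):

@app.task
def celery_delayed_task(local_1, local_2):
    # Only plain, picklable/JSON-serializable values cross the broker.
    return local_1 + local_2

def my_view(request):
    do_stuff()
    local_1 = 1
    local_2 = 4
    celery_delayed_task.delay(local_1, local_2)  # explicit args, not locals()
    return HttpResponse('OK')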
def pre_save(self, model_instance, add):
    value = super(MediaUploadField, self).pre_save(model_instance, add)
    if value and add:
        post_save.connect(self.someCallback, sender=model_instance.__class__, dispatch_uid='media_attachment_signal')
    return value

def someCallback(sender, **kwargs):
    print "callback"
    print sender
    return
is throwing the following error:

someCallback() got multiple values for keyword argument 'sender'

I honestly can't work out what I'm doing wrong; I followed the documentation precisely. I tried replacing model_instance.__class__ with the actual class import, but it throws the same error.
Does anyone have any idea what's wrong with my code?
It seems that someCallback is a model method. The first argument to model methods is always the instance itself - which is usually referenced as self. But you've called the first argument sender - so Python is trying to receive sender both as the first positional argument, and as one of the keyword arguments.
The best way to solve this is to define someCallback as a staticmethod, as these don't take the instance or class:
@staticmethod
def someCallback(sender, **kwargs):
    ...
Also note that connecting your post_save handler in a pre_save method is a very strange thing to do. Don't forget that connecting a signal is a permanent thing - it's not something that's done on a per-call basis.
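A hedged sketch of the more conventional setup: connect the handler once at import time (module and model names here are assumptions):

# signals.py (hypothetical module)
from django.db.models.signals import post_save
from myapp.models import MyModel  # hypothetical model

def media_attachment_callback(sender, instance, created, **kwargs):
    # post_save handlers receive the model class as sender and the saved
    # object as instance.
    print "callback from %s" % sender

# Connected once, at import time, not per save() call.
post_save.connect(media_attachment_callback, sender=MyModel,
                  dispatch_uid='media_attachment_signal')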