Flask-Dramatiq-Callback must be an Actor - flask

When working with dramatiq 1.9.0 (flask-dramatiq 0.6.0) I'm unable to call on_success- or on_failure-callbacks. The official dramatiq-documentation states callbacks can be used like this:
#dramatiq.actor
def taskFailed(message_data, exception_data):
print("Task failed")
#dramatiq.actor
def taskSucceeded(message_data, result):
print("Success")
dramatiqTask.send_with_options(args=(1, 2, 3), on_success=taskSucceeded, on_failure=taskFailed)
However, I'm getting the following error:
ERROR - on_failure value must be an Actor
In .../site-packages/dramatiq/actor.py there is
def message_with_options(self, *, args=None, kwargs=None, **options):
for name in ["on_failure", "on_success"]:
callback = options.get(name)
print(str(type(callback))) # Returns "<class 'flask_dramatiq.LazyActor'>"
if isinstance(callback, Actor):
options[name] = callback.actor_name
elif not isinstance(callback, (type(None), str)):
raise TypeError(name + " value must be an Actor")
which shows that the callback isn't from the type Actor but flask-dramatiqs LazyActor.
If I import the original package with import dramatiq as _dramatiq and change the decorator to _dramatiq.actor, nothing happens at all. The task won't start.
How do I define callbacks in flask-dramatiq?

Related

AssertionError on unit testing a celery task with autoretry, backoff and jitter

Using celery 4.3.0. I tried to write a unit test for the following task.
from django.core.exceptions import ObjectDoesNotExist
#shared_task(autoretry_for=(ObjectDoesNotExist,), max_retries=5, retry_backoff=10)
def process_something(data):
product = Product()
product.process(data)
Unit test:
#mock.patch('proj.tasks.Product')
#mock.patch('proj.tasks.process_something.retry')
def test_process_something_retry_failed_task(self, process_something_retry, mock_product):
mock_object = mock.MagicMock()
mock_product.return_value = mock_object
mock_object.process.side_effect = error = ObjectDoesNotExist()
with pytest.raises(ObjectDoesNotExist):
process_something(self.data)
process_something_retry.assert_called_with(exc=error)
This is the error I get after running the test:
#wraps(task.run)
def run(*args, **kwargs):
try:
return task._orig_run(*args, **kwargs)
except autoretry_for as exc:
if retry_backoff:
retry_kwargs['countdown'] = \
get_exponential_backoff_interval(
factor=retry_backoff,
retries=task.request.retries,
maximum=retry_backoff_max,
full_jitter=retry_jitter)
> raise task.retry(exc=exc, **retry_kwargs)
E TypeError: exceptions must derive from BaseException
I understand it is because of the exception. I replaced ObjectDoesNotExist everywhere with Exception instead. After running the test, I get this error:
def assert_called_with(self, /, *args, **kwargs):
"""assert that the last call was made with the specified arguments.
Raises an AssertionError if the args and keyword args passed in are
different to the last call to the mock."""
if self.call_args is None:
expected = self._format_mock_call_signature(args, kwargs)
actual = 'not called.'
error_message = ('expected call not found.\nExpected: %s\nActual: %s'
% (expected, actual))
raise AssertionError(error_message)
def _error_message():
msg = self._format_mock_failure_message(args, kwargs)
return msg
expected = self._call_matcher((args, kwargs))
actual = self._call_matcher(self.call_args)
if expected != actual:
cause = expected if isinstance(expected, Exception) else None
> raise AssertionError(_error_message()) from cause
E AssertionError: expected call not found.
E Expected: retry(exc=Exception())
E Actual: retry(exc=Exception(), countdown=7)
Please let me know how I can fix both the errors.
I had the similar issue, while I was working on tests to ensure that the celery retry logic was covering my specific scenarios. What worked for me was to use explicit retry instead of the autoretry_for parameter.
I have adjusted your code to my solution. Although my solution didn't use
shared_task I think It should work likewise. Tested on celery==5.1.2
task:
from django.core.exceptions import ObjectDoesNotExist
#shared_task(bind=True, max_retries=5, retry_backoff=10)
def process_something(self, data):
try:
product = Product()
product.process(data)
except ObjectDoesNotExist as exc:
raise self.retry(exc=exc)
test:
from proj.tasks import Product # I assume the Product class is located here
from django.core.exceptions import ObjectDoesNotExist
import celery
#mock.patch.object(Product, "__init__", Mock(return_value=None)) # just mocking the init method
#mock.patch.object(Product, "process")
#mock.patch('proj.tasks.process_something.retry')
def test_process_something_retry_failed_task(self, retry_mock, process_mock):
exc = ObjectDoesNotExist()
process_mock.side_effect = exc
retry_mock.side_effect = celery.exceptions.Retry
with pytest.raises(celery.exceptions.Retry):
process_something(self.data)
retry_mock.assert_called_with(exc=exc)
In my problem I also was using custom exceptions. With this solution I didnt need change the type of my exceptions.

Tornado on pika consumer can't run

I want to build monitoring system using RabbitMQ and Tornado. I can run the producer and my consumer can consume the data on queue but the data cant be show on website.
This just my experiment before I using the sensor
import pika
import tornado.ioloop
import tornado.web
import tornado.websocket
import logging
from threading import Thread
logging.basicConfig(lvl=logging.INFO)
clients=[]
credentials = pika.credentials.PlainCredentials('ayub','ayub')
connection = pika.BlockingConnection(pika.ConnectionParameters('192.168.43.101',
5672,
'/',
credentials))
channel = connection.channel()
def threaded_rmq():
channel.basic_consume('Queue',
on_message_callback= consumer_callback,
auto_ack=True,
exclusive=False,
consumer_tag=None,
arguments=None)
channel.start_consuming()
def disconect_rmq():
channel.stop_consuming()
Connection.close()
logging.info('Disconnected from broker')
def consumer_callback(ch,method,properties,body):
for itm in clients:
itm.write_message(body)
class SocketHandler(tornado.websocket.WebSocketHandler):
def open(self):
logging.info('websocket open')
clients.remove(self)
def close(self):
logging.info('websocket closed')
clients.remove(self)
class MainHandler(tornado.web.RequestHandler):
def get(self):
self.render("websocket.html")
application = tornado.web.Application([
(r'/ws',SocketHandler),
(r"/", MainHandler),
])
def startTornado():
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
def stopTornado():
tornado.ioloop.IOLoop.instance().stop()
if __name__ == "__main__":
logging.info('starting thread RMQ')
threadRMQ = Thread(target=threaded_rmq)
threadRMQ.start()
logging.info('starting thread tornado')
threadTornado = Thread(target=startTornado)
threadTornado.start()
try:
raw_input("server ready")
except SyntaxError:
pass
try:
logging.info('disconnected')
disconnect_rmq()
except Exception, e:
pass
stopTornado()
but I got this error
WARNING:tornado.access:404 GET /favicon.ico (192.168.43.10) 0.98ms
please help me
In your SocketHandler.open function you need to add the client not remove it.
Also consider using a set for clients instead of a list because the remove operation will be faster:
clients = set()
...
class SocketHandler(tornado.websocket.WebSocketHandler):
def open(self):
logging.info('websocket open')
clients.add(self)
def close(self):
logging.info('websocket closed')
clients.remove(self)
The message you get regarding favicon.ico is actually a warning and it's harmless (the browser is requesting an icon to show for web application but won't complain if none is available).
You might also run into threading issues because Tornado and Pika are running in different threads so you will have to synchronize them; you can use Tornado's IOLoop.add_callback method for that.

custom subroutine for WebDriverWait.until throws an error

I am testing an angular web-app using selenium and python. For my test, I am setting up some data using API calls. Now I want to wait until the data shows up in the front-end before proceeding with my test. We currently have a 60 second wait to overcome this problem; however, I was hoping for a smarter wait and wrote the following code:
def wait_for_plan_to_appear(self,driver,plan_locator):
plan_name_element = UNDEF
try:
self.navigateToPlanPage()
plan_name_element = driver.find_element_by_xpath(plan_locator)
except NoSuchElementException:
pass
return plan_name_element
def find_plan_name_element(self,plan_id):
plan_locator = '//*[#data-hraf-id="'+plan_id+'-plan-name"]'
plan_name_element = UNDEF
try:
plan_name_element = WebDriverWait(self.driver,60,2).until(self.wait_for_plan_to_appear(self.driver,plan_locator))
except TimeoutException:
self.logger.debug("Could not find the plan with plan_id = "+plan_id)
return plan_name_element
In my test script, I am calling:
self.find_plan_name_element('e7fa25a5-0b39-4a97-b99f-44c48439ce99') # the long string is the plan-id
However, when I run this code - i get following error:
error: 'int' object is not callable"
If I change the wait_for_plan_to_appear such that it returns a boolean, it throws error:
error: 'bool' object is not callable"
Has someone seen/resolved this in their work ? Thanks
I would use "...".format() to automatically convert the plan_id to a string.
Moreover you could simplify the waiter by using an expected condition :
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException
class element_loaded_and_displayed(object):
""" An expectation for checking that an element is present on the DOM of a
page and visible. Refreshes the page if the element is not present.
returns the WebElement once it is located and visible.
"""
def __init__(self, locator):
self.locator = locator
def __call__(self, driver):
try:
element = driver.find_element(*self.locator)
return element if element.is_displayed() else False
except StaleElementReferenceException:
return False
except NoSuchElementException as ex:
driver.refresh()
raise ex
def find_plan_name_element(self, plan_id):
plan_locator = (By.CSS_SELECTOR, "[data-hraf-id='{0}-plan-name']".format(plan_id))
err_message = "Could not find the plan with plan_id = {0}".format(plan_id)
wait = WebDriverWait(self.driver, timeout=60, poll_frequency=2)
return wait.until(element_loaded_and_displayed(plan_locator), err_message)

How to get the "full" async result in Celery link_error callback

I have Celery 3.1.18 running with Django 1.6.11 and RabbitMQ 3.5.4, and trying to test my async task in a failure state (CELERY_ALWAYS_EAGER=True). However, I cannot get the proper "result" in the error callback. The example in the Celery docs shows:
#app.task(bind=True)
def error_handler(self, uuid):
result = self.app.AsyncResult(uuid)
print('Task {0} raised exception: {1!r}\n{2!r}'.format(
uuid, result.result, result.traceback))
When I do this, my result is still "PENDING", result.result = '', and result.traceback=''. But the actual result returned by my .apply_async call has the right "FAILURE" state and traceback.
My code (basically a Django Rest Framework RESTful endpoint that parses a .tar.gz file, and then sends a notification back to the user, when the file is done parsing):
views.py:
from producer_main.celery import app as celery_app
#celery_app.task()
def _upload_error_simple(uuid):
print uuid
result = celery_app.AsyncResult(uuid)
print result.backend
print result.state
print result.result
print result.traceback
msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
result.result,
result.traceback)
class UploadNewFile(APIView):
def post(self, request, repository_id, format=None):
try:
uploaded_file = self.data['files'][self.data['files'].keys()[0]]
self.path = default_storage.save('{0}/{1}'.format(settings.MEDIA_ROOT,
uploaded_file.name),
uploaded_file)
print type(import_file)
self.async_result = import_file.apply_async((self.path, request.user),
link_error=_upload_error_simple.s())
print 'results from self.async_result:'
print self.async_result.id
print self.async_result.backend
print self.async_result.state
print self.async_result.result
print self.async_result.traceback
return Response()
except (PermissionDenied, InvalidArgument, NotFound, KeyError) as ex:
gutils.handle_exceptions(ex)
tasks.py:
from producer_main.celery import app
from utilities.general import upload_class
#app.task
def import_file(path, user):
"""Asynchronously import a course."""
upload_class(path, user)
celery.py:
"""
As described in
http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
"""
from __future__ import absolute_import
import os
import logging
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'producer_main.settings')
from django.conf import settings
log = logging.getLogger(__name__)
app = Celery('producer') # pylint: disable=invalid-name
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) # pragma: no cover
#app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
My backend is configured as such:
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = False
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'redis://localhost'
CELERY_RESULT_PERSISTENT = True
CELERY_IGNORE_RESULT = False
When I run my unittest for the link_error state, I get:
Creating test database for alias 'default'...
<class 'celery.local.PromiseProxy'>
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
<celery.backends.redis.RedisBackend object at 0x10aa2e110>
PENDING
None
None
results from self.async_result:
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
None
FAILURE
Non .zip / .tar.gz file passed in.
Traceback (most recent call last):
So the task results are not available in my _upload_error_simple() method, but they are available from the self.async_result returned variable...
I could not get the link and link_error callbacks to work, so I finally had to use the on_failure and on_success task methods described in the docs and this SO question. My tasks.py then looks like:
class ErrorHandlingTask(Task):
abstract = True
def on_failure(self, exc, task_id, targs, tkwargs, einfo):
msg = 'Import of {0} raised exception: {1!r}'.format(targs[0].split('/')[-1],
str(exc))
def on_success(self, retval, task_id, targs, tkwargs):
msg = "Upload successful. You may now view your course."
#app.task(base=ErrorHandlingTask)
def import_file(path, user):
"""Asynchronously import a course."""
upload_class(path, user)
You appear to have _upload_error() as a bound method of your class - this is probably not what you want. try making it a stand-along task:
#celery_app.task(bind=True)
def _upload_error(self, uuid):
result = celery_app.AsyncResult(uuid)
msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
result.result,
result.traceback)
class Whatever(object):
....
self.async_result = import_file.apply_async((self.path, request.user),
link=self._upload_success.s(
"Upload finished."),
link_error=_upload_error.s())
in fact there's no need for the self paramater since it's not used so you could just do this:
#celery_app.task()
def _upload_error(uuid):
result = celery_app.AsyncResult(uuid)
msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
result.result,
result.traceback)
note the absence of bind=True and self
Be careful with UUID instance!
If you will try to get status of a task with id not string type but UUID type, you will only get PENDING status.
from uuid import UUID
from celery.result import AsyncResult
task_id = UUID('d4337c01-4402-48e9-9e9c-6e9919d5e282')
print(AsyncResult(task_id).state)
# PENDING
print(AsyncResult(str(task_id)).state)
# SUCCESS

Celery Task the difference between these two tasks below

What's the difference between these two tasks below?
The first one gives an error, the second one runs just fine. Both are the same, they accept extra arguments and they are both called in the same way.
ProcessRequests.delay(batch) **error object.__new__() takes no parameters**
SendMessage.delay(message.pk, self.pk) **works!!!!**
Now, I have been made aware of what the error means, but my confusion is why one works and not the other.
Tasks...
1)
class ProcessRequests(Task):
name = "Request to Process"
max_retries = 1
default_retry_delay = 3
def run(self, batch):
#do something
2)
class SendMessage(Task):
name = "Sending SMS"
max_retries = 10
default_retry_delay = 3
def run(self, message_id, gateway_id=None, **kwargs):
#do something
Full Task Code....
from celery.task import Task
from celery.decorators import task
import logging
from sms.models import Message, Gateway, Batch
from contacts.models import Contact
from accounts.models import Transaction, Account
class SendMessage(Task):
name = "Sending SMS"
max_retries = 10
default_retry_delay = 3
def run(self, message_id, gateway_id=None, **kwargs):
logging.debug("About to send a message.")
# Because we don't always have control over transactions
# in our calling code, we will retry up to 10 times, every 3
# seconds, in order to try to allow for the commit to the database
# to finish. That gives the server 30 seconds to write all of
# the data to the database, and finish the view.
try:
message = Message.objects.get(pk=message_id)
except Exception as exc:
raise SendMessage.retry(exc=exc)
if not gateway_id:
if hasattr(message.billee, 'sms_gateway'):
gateway = message.billee.sms_gateway
else:
gateway = Gateway.objects.all()[0]
else:
gateway = Gateway.objects.get(pk=gateway_id)
# Check we have a credits to sent me message
account = Account.objects.get(user=message.sender)
# I'm getting the non-cathed version here, check performance!!!!!
if account._balance() >= message.length:
response = gateway._send(message)
if response.status == 'Sent':
# Take credit from users account.
transaction = Transaction(
account=account,
amount=- message.charge,
description="Debit: SMS Sent",
)
transaction.save()
message.billed = True
message.save()
else:
pass
logging.debug("Done sending message.")
class ProcessRequests(Task):
name = "Request to Process"
max_retries = 1
default_retry_delay = 3
def run(self, message_batch):
for e in Contact.objects.filter(contact_owner=message_batch.user, group=message_batch.group):
msg = Message.objects.create(
recipient_number=e.mobile,
content=message_batch.content,
sender=e.contact_owner,
billee=message_batch.user,
sender_name=message_batch.sender_name
)
gateway = Gateway.objects.get(pk=2)
msg.send(gateway)
#replace('[FIRSTNAME]', e.first_name)
tried:
ProcessRequests.delay(batch) should work gives error error object.__new__() takes no parameters
ProcessRequests().delay(batch) also gives error error object.__new__() takes no parameters
I was able to reproduce your issue:
import celery
from celery.task import Task
#celery.task
class Foo(celery.Task):
name = "foo"
def run(self, batch):
print 'Foo'
class Bar(celery.Task):
name = "bar"
def run(self, batch):
print 'Bar'
# subclass deprecated base Task class
class Bar2(Task):
name = "bar2"
def run(self, batch):
print 'Bar2'
#celery.task(name='def-foo')
def foo(batch):
print 'foo'
Output:
In [2]: foo.delay('x')
[WARNING/PoolWorker-4] foo
In [3]: Foo().delay('x')
[WARNING/PoolWorker-2] Foo
In [4]: Bar().delay('x')
[WARNING/PoolWorker-3] Bar
In [5]: Foo.delay('x')
TypeError: object.__new__() takes no parameters
In [6]: Bar.delay('x')
TypeError: unbound method delay() must be called with Bar instance as first argument (got str instance instead)
In [7]: Bar2.delay('x')
[WARNING/PoolWorker-1] Bar2
I see you use deprecated celery.task.Task base class, this is why you don't get unbound method errors:
Definition: Task(self, *args, **kwargs)
Docstring:
Deprecated Task base class.
Modern applications should use :class:`celery.Task` instead.
I don't know why ProcessRequests doesn't work though. Maybe it is some caching issues, you may have tried to apply the decorator to your class before and it got cached, and this is exactly the error that you get when you try to apply this decorator to a Task class.
Delete all .pyc file, restart celery workers and try again.
Don't use classes directly
Tasks are instantiated only once per (worker) process, so creating objects of task classes (on client-side) every time doesn't make sense, i.e. Bar() is wrong.
Foo.delay() or Foo().delay() might or might not work, depends on combination of decorator name argument and class name attribute.
Get the task object from celery.registry.tasks dictionary or just use #celery.task decorator on functions (foo in my example) instead.