I'm using the Tornado scheduler of APScheduler. Whenever a task is called, I need to log its exceptions. For handling exceptions I've created a decorator that gets the future object and takes the appropriate action. It's working fine, however it's not logging inside the callback function of the future. I've run pdb inside the callback, and the logger instance properties are as expected, yet it still doesn't log to the file at all.
Code is,
logger = logging.getLogger('myLogger')

def handle_callback(result):
    logger.debug(result.result())
    logger.info(result.result())
    logger.error(result.result())
    logger.exception(result.result())
    logger.exception(result.exc_info())
def handle_exceptions():
    def wrapped(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            future = fn(*args, **kwargs)
            future.add_done_callback(handle_callback)
        return wrapper
    return wrapped
@handle_exceptions()
@gen.coroutine
def run_task(job_id):
    logger.info('executing job {}'.format(job_id))
    raise MyException
P.S. I'm using Python 2.7.
The wrapper is missing the return of the future; without it, the IOLoop won't continue if there is an async call inside. Let's add an async call:
@handle_exceptions
@gen.coroutine
def run_task(job_id):
    logger.info('executing job {}'.format(job_id))
    yield gen.sleep(1)
    raise Exception('blah')
As you may have noticed, I've removed the () from the decorator to simplify it. It doesn't have to be nested, so the decorator could look like:
def handle_exceptions(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        future = fn(*args, **kwargs)
        future.add_done_callback(handle_callback)
        return future  # <<< we need this
    return wrapper
Next, the callback calls Future.result(), which will immediately re-raise the exception, so it's better to check whether there is an exception in the first place:
def handle_callback(result):
    exc_info = result.exc_info()
    if exc_info:
        logger.error('EXCEPTION %s', exc_info)
Putting this together in a simple example:
import logging
from functools import wraps, partial

from tornado import gen, ioloop

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handle_callback(result):
    exc_info = result.exc_info()
    if exc_info:
        logger.error('EXCEPTION %s', exc_info)

def handle_exceptions(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        future = fn(*args, **kwargs)
        future.add_done_callback(handle_callback)
        return future
    return wrapper

@handle_exceptions
@gen.coroutine
def run_task(job_id):
    logger.info('executing job {}'.format(job_id))
    yield gen.sleep(1)
    raise Exception('blah')

ioloop.IOLoop.instance().run_sync(partial(run_task, 123))
Since the question does not provide any info about the logging setup itself, I've used the standard root logger with a changed level.
The code output:
INFO:root:executing job 123
ERROR:root:EXCEPTION (<type 'exceptions.Exception'>, Exception('blah',), <traceback object at 0x7f807df07dd0>)
Traceback (most recent call last):
File "test.py", line 31, in <module>
ioloop.IOLoop.instance().run_sync(partial(run_task, 123))
File "/tmp/so/lib/python2.7/site-packages/tornado/ioloop.py", line 458, in run_sync
return future_cell[0].result()
File "/tmp/so/lib/python2.7/site-packages/tornado/concurrent.py", line 238, in result
raise_exc_info(self._exc_info)
File "/tmp/so/lib/python2.7/site-packages/tornado/gen.py", line 1069, in run
yielded = self.gen.send(value)
File "test.py", line 28, in run_task
raise Exception('blah')
Exception: blah
If there are any other issues, I presume they are related to the logging config/setup.
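For completeness, here's a minimal sketch of a file-based setup for 'myLogger' (the file name and format are assumptions, since the question doesn't show the actual logging config):

import logging

logger = logging.getLogger('myLogger')
logger.setLevel(logging.DEBUG)
# Send this logger's records to a file; the handler and format are illustrative.
file_handler = logging.FileHandler('scheduler.log')
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(file_handler)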
I've got the following custom action in my view:
from rest_framework import viewsets
from rest_framework.decorators import action

class OrderAPIViewSet(viewsets.ViewSet):
    def create(self, request):
        print("Here: working")

    @action(detail=True, methods=['post'])
    def add(self, request, *args, **kwargs):
        print("HERE in custom action")
        order = self.get_object()
        print(order)
my app's urls.py is:
from rest_framework import routers
from .views import OrderAPIViewSet
router = routers.DefaultRouter()
router.register(r'orders', OrderAPIViewSet, basename='order')
urlpatterns = router.urls
So in my test, when I POST to orders/ it works, but when I try to access orders/{pk}/add it fails. I mean, the reverse itself is failing:
ORDERS_LIST_URL = reverse('order-list')
ORDERS_ADD_URL = reverse('order-add')

class PublicOrderApiTests(TestCase):
    def test_sample_test(self):
        data = {}
        res = self.client.post(ORDERS_ADD_URL, data, format='json')
As I said before, I've got a separate test where I use ORDERS_LIST_URL like this:
res = self.client.post(ORDERS_LIST_URL, data, format='json')
but when running the test I'm getting the following error:
ImportError: Failed to import test module: orders.tests
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/unittest/loader.py", line 436, in _find_test_path
    module = self._get_module_from_name(name)
  File "/usr/local/lib/python3.7/unittest/loader.py", line 377, in _get_module_from_name
    __import__(name)
  File "/app/orders/tests.py", line 22, in <module>
    ORDERS_ADD_URL = reverse('order-add')
  File "/usr/local/lib/python3.7/site-packages/django/urls/base.py", line 87, in reverse
    return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
  File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 685, in _reverse_with_prefix
    raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'order-add' with no arguments not found. 2 pattern(s) tried: ['orders/(?P<pk>[^/.]+)/add\.(?P<format>[a-z0-9]+)/?$', 'orders/(?P<pk>[^/.]+)/add/$']

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (errors=1)
According to the documentation I shouldn't need to register this endpoint; the router is supposed to do it by itself. What am I missing?
The first thing you've missed is the pk in your reverse. Since the add API needs the pk of an Order object, you need to pass it to the reverse function. For example:
order_add_url = reverse('order-add', kwargs={'pk': 1})
print(order_add_url) # which will print '/orders/1/add/'
So I think you should move this part into the body of PublicOrderApiTests's methods, since you need a dynamic URL per test object.
Another problem is that the ViewSet class does not support self.get_object(). If you want to use that method you should either implement it yourself or use the rest framework GenericViewSet (i.e. from rest_framework.viewsets import GenericViewSet and inherit from it instead of ViewSet); then you can access get_object(). You can also read more about generic views in the rest framework docs. A sketch of such a test follows below.
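For illustration, a rough sketch of a test that builds the URL per object (the Order model and its create() call are assumptions, since the question doesn't show the model):

from django.test import TestCase
from django.urls import reverse

from .models import Order  # assumed model behind OrderAPIViewSet

class PublicOrderApiTests(TestCase):
    def test_add_to_order(self):
        # Reverse inside the test method, once there is a concrete pk to point at.
        order = Order.objects.create()
        add_url = reverse('order-add', kwargs={'pk': order.pk})
        res = self.client.post(add_url, {}, format='json')
        # ... assertions about the response go here ...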
I'm trying to set up subscriptions with graphene-django and channels using channels_graphql_ws.
I'm getting the following error when trying to run my subscription query:
An error occurred while resolving field Subscription.onNewComment
Traceback (most recent call last):
File "/Users/noroozim/.pyenv/versions/nexus37/lib/python3.7/site-packages/graphql/execution/executor.py", line 450, in resolve_or_error
return executor.execute(resolve_fn, source, info, **args)
File "/Users/noroozim/.pyenv/versions/nexus37/lib/python3.7/site-packages/graphql/execution/executors/sync.py", line 16, in execute
return fn(*args, **kwargs)
File "/Users/noroozim/.pyenv/versions/nexus37/lib/python3.7/site-packages/channels_graphql_ws/subscription.py", line 371, in _subscribe
register_subscription = root.register_subscription
AttributeError: 'NoneType' object has no attribute 'register_subscription'
Here is what I have in my setup:
# subscription.py
import channels_graphql_ws
from graphene import Field, String

# types.UserCommentNode comes from the project's own types module (not shown here).

class OnNewComment(channels_graphql_ws.Subscription):
    comment = Field(types.UserCommentNode)

    class Arguments:
        content_type = String(required=False)

    def subscribe(root, info, content_type):
        return [content_type] if content_type is not None else None

    def publish(self, info, content_type=None):
        new_comment_content_type = self["content_type"]
        new_comment = self["comment"]
        return OnNewComment(
            content_type=content_type, comment=new_comment
        )

    @classmethod
    def new_comment(cls, content_type, comment):
        cls.broadcast(
            # group=content_type,
            payload={"comment": comment},
        )
I'm not sure if this is a bug or if I'm missing something.
I found out that graphene's GraphiQL template doesn't come with WebSocket support, and I had to modify my graphene/graphiql.html template to incorporate WebSocket support to get it to work.
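For reference, the subscription query also has to go over a WebSocket served by a channels consumer rather than the plain HTTP GraphQL view. A rough sketch of that wiring, assuming channels 2.x and a graphene schema object named my_schema that includes OnNewComment (all names here are placeholders, not the asker's actual setup):

# routing.py (sketch)
import channels_graphql_ws
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from .schema import my_schema  # assumed location of the graphene schema

class MyGraphqlWsConsumer(channels_graphql_ws.GraphqlWsConsumer):
    """Serves GraphQL-over-WebSocket requests with the project schema."""
    schema = my_schema

application = ProtocolTypeRouter({
    "websocket": URLRouter([
        path("graphql/", MyGraphqlWsConsumer),  # on channels 3+, use MyGraphqlWsConsumer.as_asgi()
    ]),
})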
I am trying to use Celery for my app, which is made with Flask, but I get the following error: "Working outside of request context". It sounds like I am trying to access a request object before the front end makes a request, but I cannot figure out what is wrong. I'd appreciate it if you could let me know what the problem is.
[2017-04-26 13:33:04,940: INFO/MainProcess] Received task: app.result[139a2679-e9df-49b9-ab42-1f53a09c01fd]
[2017-04-26 13:33:06,168: ERROR/PoolWorker-2] Task app.result[139a2679-e9df-49b9-ab42-1f53a09c01fd] raised unexpected: RuntimeError('Working outside of request context.\n\nThis typically means that you attempted to use functionality that needed\nan active HTTP request. Consult the documentation on testing for\ninformation about how to avoid this problem.',)
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/Users/Pooneh/projects/applications/ray_tracer_app_flask/flask_celery.py", line 14, in __call__
return TaskBase.__call__(self, *args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 622, in __protected_call__
return self.run(*args, **kwargs)
File "/Users/Pooneh/projects/applications/ray_tracer_app_flask/app.py", line 33, in final_result
light_position = request.args.get("light_position", "(0, 0, 0)", type=str)
File "/Library/Python/2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/Library/Python/2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/Library/Python/2.7/site-packages/flask/globals.py", line 37, in _lookup_req_object
raise RuntimeError(_request_ctx_err_msg)
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
app.py
import re
import base64

from flask import Flask, render_template, request, jsonify
from flask_celery import make_celery  # the make_celery helper from the project's flask_celery.py

app = Flask(__name__)
app.config.update(CELERY_BROKER_URL='amqp://localhost//',
                  CELERY_RESULT_BACKEND='amqp://localhost//')
celery = make_celery(app)

@app.route('/')
def my_form():
    return render_template("form.html")

@app.route('/result')
def result():
    final_result.delay()
    return "celery!"

@celery.task(name='app.result')
def final_result():
    light_position = request.args.get("light_position", "(0, 0, 0)", type=str)
    light_position_coor = re.findall("[-+]?\d*\.\d+|[-+]?\d+", light_position)
    x = float(light_position_coor[0])
    y = float(light_position_coor[1])
    z = float(light_position_coor[2])
    encoded = base64.b64encode(open("/Users/payande/projects/applications/app_flask/static/pic.png", "rb").read())
    return jsonify(data=encoded)
Celery tasks are run by a background worker, asynchronously, outside of the HTTP request (which is one of the main benefits of using them), so you cannot access the request object within the task.
You could pass the data to the task as arguments instead:
final_result.delay(request.args.get("light_position"))
@celery.task(name='app.result')
def final_result(light_position):
    ...
Of course, this also means that the return value of the task cannot be used in an HTTP response (since the task may complete after the response has already been sent).
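Putting the two pieces together, the pattern looks roughly like this (a sketch based on the question's code, not a drop-in replacement; the coordinate parsing is illustrative):

@app.route('/result')
def result():
    # Read the request data while we are still inside the request context...
    light_position = request.args.get("light_position", "(0, 0, 0)", type=str)
    # ...and hand it to the task as a plain argument.
    task = final_result.delay(light_position)
    return "queued task {}".format(task.id)

@celery.task(name='app.result')
def final_result(light_position):
    # The task only sees the data it was given; there is no request object here.
    coords = re.findall(r"[-+]?\d*\.\d+|[-+]?\d+", light_position)
    x, y, z = (float(c) for c in coords[:3])
    # ... do the actual work with x, y, z and store/return the result ...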
I am currently trying to work on a project using Twisted Python. My problem, specifically, is my attempt to gather user input while also listening for connections using listenTCP(). I looked up the problem and found that stdio.StandardIO seems the most efficient way of doing so, since I am already using Twisted. I have also seen the example code on Twisted Matrix, stdin.py and stdiodemo.py, however I am struggling with how to apply that example code to my specific problem, given that I need to read from the socket and also gather user input while performing TCP tasks.
The project I am working on is much larger however the small example code illustrates what I am trying to do and isolates the problem I am having. Any help in solving my problem is really appreciated.
Server.py
from twisted.internet.protocol import Factory
from twisted.protocols import basic
from twisted.internet import reactor, protocol, stdio
from Tkinter import *
import os, sys

class ServerProtocol(protocol.Protocol):
    def __init__(self, factory):
        self.factory = factory
        stdio.StandardIO(self)

    def connectionMade(self):
        self.factory.numConnections += 1
        self.factory.clients.append(self)

    def dataReceived(self, data):
        try:
            print 'receiving data'
            print data
        except Exception, e:
            print e

    def connectionLost(self, reason):
        self.factory.numConnections -= 1
        self.factory.clients.remove(self)

class ServerFactory(Factory):
    numConnections = 0

    def buildProtocol(self, addr):
        return ServerProtocol(self)

class StdioCommandLine(basic.LineReceiver):
    from os import linesep as delimiter

    def connectionMade(self):
        self.transport.write('>>> ')

    def lineReceived(self, line):
        self.sendLine('Echo: ' + line)
        self.transport.write('>>> ')

reactor.listenTCP(9001, ServerFactory())
stdio.StandardIO(StdioCommandLine())
reactor.run()
Client.py
from twisted.internet import reactor, protocol
import os, time, sys
import argparse

class MessageClientProtocol(protocol.Protocol):
    def __init__(self, factory):
        self.factory = factory

    def connectionMade(self):
        self.sendMessage()

    def sendMessage(self):
        print 'sending message'
        try:
            self.transport.write('hello world')
        except Exception, e:
            print e

    def dataReceived(self, data):
        print 'received: ', data
        self.sendMessage()

class MessageClientFactory(protocol.ClientFactory):
    def __init__(self, message):
        self.message = message

    def buildProtocol(self, addr):
        return MessageClientProtocol(self)

    def clientConnectionFailed(self, connector, reason):
        print 'Connection Failed: ', reason.getErrorMessage()
        reactor.stop()

    def clientConnectionLost(self, connector, reason):
        print 'Connection Lost: ', reason.getErrorMessage()

reactor.connectTCP('192.168.1.70', 9001, MessageClientFactory('hello world - client'))
reactor.run()
At the moment the above code is returning an Unhandled Error, as follows. This demonstrates me using stdin, the data being echoed to stdout, and then a client connecting and causing the error:
python Server.py
>>> hello
Echo: hello
>>> Unhandled Error
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/selectreactor.py", line 149, in _doReadOrWrite
    why = getattr(selectable, method)()
  --- <exception caught here> ---
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/twisted/internet/tcp.py", line 1067, in doRead
    protocol = s
The traceback you provided seems to be cut off. I tried to run the code on my machine and it shows this traceback:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/twisted/python/log.py", line 84, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/usr/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/usr/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
return func(*args,**kw)
File "/usr/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 597, in _doReadOrWrite
why = selectable.doRead()
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/twisted/internet/tcp.py", line 1067, in doRead
protocol = self.factory.buildProtocol(self._buildAddr(addr))
File "Server.py", line 30, in buildProtocol
return ServerProtocol(self)
File "Server.py", line 10, in __init__
stdio.StandardIO(self)
File "/usr/lib/python2.7/site-packages/twisted/internet/_posixstdio.py", line 42, in __init__
self.protocol.makeConnection(self)
File "/usr/lib/python2.7/site-packages/twisted/internet/protocol.py", line 490, in makeConnection
self.connectionMade()
File "Server.py", line 14, in connectionMade
self.factory.clients.append(self)
exceptions.AttributeError: ServerFactory instance has no attribute 'clients'
As can easily be seen with the full traceback, the factory is missing the clients attribute. This can be fixed e.g. by adding this to your ServerFactory class:
def __init__(self):
    self.clients = []
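Put together, the factory could look like this (same names as the question's code):

class ServerFactory(Factory):
    numConnections = 0

    def __init__(self):
        self.clients = []  # so connectionMade/connectionLost can append and remove safely

    def buildProtocol(self, addr):
        return ServerProtocol(self)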
I'm trying to test caching in my code. I am using memcached as the backend, and I set the CACHES config to use memcached under 'basic'. There isn't a direct route to the get_stuff method. Here is my code:
I have a view that looks like
import random

from django.views.decorators.cache import cache_page
from django.views.generic import TemplateView

from .models import MyModel

class MyView(TemplateView):
    """ Django view ... """
    template_name = "home.html"

    @cache_page(60 * 15, cache="basic")
    def get_stuff(self):  # pylint: disable=no-self-use
        """ Get all ... """
        return MyModel.objects.filter(visible=True, type=MyModel.CONSTANT)

    def get_context_data(self, **kwargs):
        context = super(MyView, self).get_context_data(**kwargs)
        stuffs = self.get_stuff()
        if stuffs:
            context['stuff'] = random.choice(stuffs)
        return context
I also have a test that looks like
from django.test import TestCase
from django.test.client import RequestFactory

from xyz.apps.appname import views

class MyViewTestCase(TestCase):
    """ Unit tests for the MyView class """

    def test_caching_get_stuff(self):
        """ Tests that we are properly caching the query to get all stuffs """
        view = views.MyView.as_view()
        factory = RequestFactory()
        request = factory.get('/')
        response = view(request)
        print response.context_data['stuff']
When I run my test I get this error:
Traceback (most recent call last):
File "/path/to/app/appname/tests.py", line 142, in test_caching_get_stuff
response = view(request)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py", line 87, in dispatch
return handler(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py", line 154, in get
context = self.get_context_data(**kwargs)
File "/path/to/app/appname/views.py", line 50, in get_context_data
stuffs = self.get_stuff()
File "/usr/local/lib/python2.7/site-packages/django/utils/decorators.py", line 91, in _wrapped_view
result = middleware.process_request(request)
File "/usr/local/lib/python2.7/site-packages/django/middleware/cache.py", line 134, in process_request
if not request.method in ('GET', 'HEAD'):
AttributeError: 'MyView' object has no attribute 'method'
What is causing this and how do I fix this? I'm fairly new to Python and Django.
Can you show what you have for MIDDLEWARE_CLASSES in settings.py? I looked through the code where your error showed up, and it notes that FetchFromCacheMiddleware must be the last piece of middleware in MIDDLEWARE_CLASSES. I wonder if that is causing your problem.
Related documentation here.
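For reference, the ordering described in the docs looks roughly like this (the middle entries are placeholders for whatever you already have):

MIDDLEWARE_CLASSES = (
    'django.middleware.cache.UpdateCacheMiddleware',    # must come first
    'django.middleware.common.CommonMiddleware',
    # ... your other middleware ...
    'django.middleware.cache.FetchFromCacheMiddleware', # must come last
)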
I believe the issue is that the cache_page decorator is meant to be used on view functions, not to decorate methods of class-based views. View functions take 'request' as their first argument, and if you look at the traceback, you can see that in fact the error is caused because the first argument of the function you tried to decorate is not a request but rather 'self' (i.e., the MyView object referred to in the error).
I am not sure if or how the cache_page decorator can be used for class-based views, though this page in the docs suggests a way of using it in the URLconf, and I imagine you could wrap the return of ViewClass.as_view() in a similar fashion. If the thing you're trying to wrap in caching is not properly a view but rather a utility function of some sort, you should drop down to the lower-level cache API inside your function (not as a decorator).
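For example, a rough sketch of the low-level approach inside get_stuff (assumes Django 1.7+ for django.core.cache.caches; the cache key and timeout are arbitrary choices here):

from django.core.cache import caches

# inside MyView
def get_stuff(self):
    """ Get all ... , caching the result for 15 minutes. """
    cache = caches['basic']
    stuffs = cache.get('myview_visible_stuff')
    if stuffs is None:
        # Evaluate the queryset so a concrete list is what gets stored in memcached.
        stuffs = list(MyModel.objects.filter(visible=True, type=MyModel.CONSTANT))
        cache.set('myview_visible_stuff', stuffs, 60 * 15)
    return stuffs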