So I have an error handler like this:
@app.errorhandler(500)
def internal_server_error(error):
    utils.send_error_to_slack()
    return render('500'), 500
One of the great features of the Google App Engine environment is, if an exception happens, it gets tracked in StackDriver.
Problem is, by using this error handler, I catch all exceptions in my app and nothing makes it up to StackDriver.
So what I want is to keep this error handler, which provides a nice result for our end users, and also re-raise the original exception. I'm not quite sure how to do this, since the function ends with a return: I can't raise before the return, and nothing after it is ever reached. I know I have to wrap this function somehow, I'm just not sure how.
I did see the "PROPAGATE_EXCEPTIONS" option, but activating it doesn't allow the nice-end-user-error-page to be rendered.
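For reference, the kind of thing I'm imagining is roughly this (an untested sketch, assuming the google-cloud-error-reporting client library and my existing helpers): instead of re-raising, report the exception to StackDriver explicitly from inside the handler, then still return the friendly page.
from google.cloud import error_reporting

error_client = error_reporting.Client()  # assumes default App Engine credentials

@app.errorhandler(500)
def internal_server_error(error):
    utils.send_error_to_slack()
    # Forward the active exception to StackDriver ourselves before
    # swallowing it with the friendly response.
    error_client.report_exception()
    return render('500'), 500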
Thanks in advance for your help.
Related
I'm developing an API using Django Rest Framework. In the code, there may be a lot of exceptional situations that we haven't thought of and that cause 5XX errors. One approach to handling these errors is to wrap the whole function (or the places where you guess an unknown error might occur) in a try/except. I want to show the details of the error in the API response (not just in the logging server) when the debug mode is on in Django. Also, I want to hide these details in production mode (when debug is false) and just pass the error code with no body when the error is 5XX.
Is there any official (and efficient) way to do that? I was thinking of creating a middleware to handle it, but then I would have to check every response before passing it to the user, which is not efficient at all. I thought there might be a built-in function that handles this, which I could override to hide the body of 5XX errors.
Update 1:
For example, I'm using this code in my view:
try:  # Check if the code is valid
    # Some code which results in Exception
except Exception as e:  # noqa
    return Response(str(e), status=status.HTTP_500_INTERNAL_SERVER_ERROR)
I'm getting the same error with DEBUG=False and DEBUG=True. In my case, the error is:
"Cannot resolve keyword 'keyword' into field. Choices are: keyword1, keyword2, keyword3, keyword4"
And this response exposes some sensitive field names to the frontend. It seems DEBUG=False only applies to the errors Django itself renders, not the errors you generate yourself (which totally makes sense, because I'm explicitly returning the error in the response). In other words, DEBUG=False affects only the exception trace screen, not manually generated 5XX responses.
If you remove the try/except, Django will just raise the error as normal: this will display details if settings.DEBUG is True, otherwise it will show an error template which you can customize to meet your needs. https://docs.djangoproject.com/en/4.0/topics/http/views/#customizing-error-views
After seeing the additional information, you just need to add an if statement to decide what to put in the response body. This is what Django already does by default. Alternatively, just let Django do this for you and remove the try...except.
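A minimal sketch of that if statement inside the view's except block, reusing the snippet from the update above (only the settings.DEBUG check is new):
from django.conf import settings
from rest_framework import status
from rest_framework.response import Response

try:  # Check if the code is valid
    # Some code which results in Exception
    ...
except Exception as e:  # noqa
    # Expose the exception text only while debugging; send an empty body
    # in production so field names and other internals are not leaked.
    detail = str(e) if settings.DEBUG else None
    return Response(detail, status=status.HTTP_500_INTERNAL_SERVER_ERROR)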
Original Answer:
I want to show the details of the error in the API response (Not just in the logging server) when the debug mode is on in Django. Also, I want to hide these details in production mode (when debug is false) and just pass the error code with no body when the error is 5XX.
You already answered your question. Setting DEBUG = True in settings.py will give an HTML response with the stack trace and other debugging information. Setting DEBUG = False, turns off this debugging information and does exactly as you say.
One particular function in my Django server performs a lot of computation during a request and that computation involves a lot of local variables. When an error is thrown during the computation, Django spits out all the function's local variables onto the error page. This process takes Django 10-15 seconds (presumably because of the number/size of my local variables)...
I want to disable the error page for just that particular function, so I can get to see the simple stacktrace faster.
I tried using the sensitive_variables decorator but that doesn't do anything when DEBUG=True.
I also tried simply catching the exception and throwing a new exception:
try:
    return mymodule.performComputations(self)
except Exception as e:
    raise Exception(repr(e))
but Django still manages to output all the local variables from the performComputations method.
My only option right now is to set DEBUG=False globally anytime I'm actively developing that particular code flow / likely to encounter errors. Any other suggestions? Thanks!
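One possible workaround (a rough, unverified sketch): suppress the exception chain with "from None" so the debug page doesn't walk back into performComputations' frames, and print the original traceback to the console yourself.
import traceback

try:
    return mymodule.performComputations(self)
except Exception as e:
    # Dump the plain stack trace to the console, without the debug
    # page's local-variable rendering.
    traceback.print_exc()
    # "from None" suppresses exception chaining, so the debug page should
    # show only this frame rather than every frame (and its locals)
    # inside performComputations.
    raise Exception(repr(e)) from None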
I run flask in debug mode and quite often, when I make changes and reload a page, I get thrown a No user_loader exception
Exception: No user_loader has been installed for this LoginManager. Refer to https://flask-login.readthedocs.io/en/latest/#how-it-works for more info.
I have a user_loader written right after I define my User class (it's moved around):
@login.user_loader
def load_user(id):
    return User.query.get(int(id))
This error persists on every refresh of the page until I reset the flask app itself despite being in debug mode. Then the error disappears.
Is this a known bug or something to be expected?
UPDATE
So it's been a while since I posted this question, but it just got an upvote, so someone out there is experiencing a similar problem. I've gained more experience with this problem since then, so I might be able to elucidate it a bit:
After a major refactor of my app I started getting a similar sort of exception (can't remember the exact exception) essentially saying that a given module can't be found (I believe it was a route). It seems to occur most often when I make certain changes to the SQLA models or some other kind of extensive change.
I wish I could be more clear but the error is mysterious and it often appears when I least expect it. There is certainly a kind of change that can be made to the code that results in the debug-mode server failing and needing to be restarted.
I know that is still not very illuminating, but it's certainly more accurate than the first half of this post.
I ran into this issue recently. I was also getting this error:
Exception: No user_loader has been installed for this LoginManager. Refer to https://flask-login.readthedocs.io/en/latest/#how-it-works for more info.
Here was my declaration of the user_loader function:
@login_manager.user_loader
def load_user(user_id):
    # since the user_id is just the primary key of our user table, use it in the query for the user
    return User.query.get(int(user_id))
It turns out that, while copying and pasting from the tutorial, I had pasted the load_user function AFTER:
return app
In other words, the execution of the code never reached my user_loader declaration. Make sure your return statement is after your user_loader declaration.
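A rough sketch of the corrected ordering in an application factory (the module and variable names here are illustrative, not taken from any particular tutorial):
from flask import Flask
from flask_login import LoginManager

login_manager = LoginManager()

def create_app():
    app = Flask(__name__)
    login_manager.init_app(app)

    from .models import User  # assumed User model with an integer primary key

    @login_manager.user_loader
    def load_user(user_id):
        return User.query.get(int(user_id))

    # The user_loader above must be registered before this return,
    # otherwise it never runs and Flask-Login raises the exception.
    return app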
How can I make Piston return the full traceback of an exception? By default it only returns the last error message, like:
id() takes exactly one argument (0 given)
I need to know which file and which line...
Piston builds an HTTP status response via utils.rc; no errors are raised.
from the documentation:
Configuration variables
Piston is configurable in a couple of ways, which allows more granular
control of some areas without editing the code.
settings.PISTON_EMAIL_ERRORS: If (when) Piston crashes, it will email the administrators a backtrace (like the Django one you see during DEBUG = True).
settings.PISTON_DISPLAY_ERRORS: Upon crashing, will display a small backtrace to the client, including the expected method signature.
settings.PISTON_STREAM_OUTPUT: When enabled, Piston will instruct Django to stream the output to the client, but please read about streaming before enabling it.
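For reference, a minimal settings.py sketch of those flags (the values shown are just examples, not recommendations):
# settings.py -- Piston error-handling flags quoted above
PISTON_EMAIL_ERRORS = True     # email the admins a backtrace on crash
PISTON_DISPLAY_ERRORS = True   # show a small backtrace to the client
PISTON_STREAM_OUTPUT = False   # leave streaming off unless you've read the docs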
I'd recommend setting up a logger; Sentry together with Raven is rather convenient, and you get to configure your own log level and handler.
How can I get Django 1.0 to write all errors to the console or a log file when running runserver in debug mode?
I've tried using a middleware class with process_exception function as described in the accepted answer to this question:
How do you log server errors on django sites
The process_exception function is called for some exceptions (eg: assert(False) in views.py), but process_exception is not getting called for other errors like ImportErrors (eg: import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors? But then I would expect runserver to complain if it was a compile-time error, and it doesn't.
I've watched Simon Willison's fantastic presentation on Django debugging (http://simonwillison.net/2008/May/22/debugging/) but I didn't see an option that would work well for me.
In case it's relevant, I'm writing a Facebook app and Facebook masks HTTP 500 errors with their own message rather than showing Django's awesomely informative 500 page. So I need a way for all types of errors to be written to the console or file.
Edit: I guess my expectation is that if Django can return a 500 error page with lots of detail when I have a bad import (ImportError) in urls.py, it should be able to write the same detail to the console or a file without having to add any additional exception handling to the code. I've never seen exception handling around import statements.
Thanks,
Jeff
It's a bit extreme, but for debugging purposes, you can turn on the DEBUG_PROPAGATE_EXCEPTIONS setting. This will allow you to set up your own error handling. The easiest way to set up said error handling would be to override sys.excepthook. This will terminate your application, but it will work. There may be things you can do to make this not kill your app, but this will depend on what platform you're deploying this for. At any rate, never use this in production!
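A bare-bones sketch of such a sys.excepthook override (the log file name is arbitrary, and whether the hook is actually reached depends on how your server runs the code):
import sys
import traceback

def log_uncaught(exc_type, exc_value, exc_tb):
    # Write the full traceback to a file before the process goes down.
    with open("uncaught_exceptions.log", "a") as f:
        traceback.print_exception(exc_type, exc_value, exc_tb, file=f)
    # Fall back to the default behaviour (print to stderr).
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = log_uncaught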
For production, you're pretty much going to have to have extensive error handling in place. One technique I've used is something like this:
>>> def log_error(func):
...     def _call_func(*args, **argd):
...         try:
...             return func(*args, **argd)
...         except:
...             print("error")  # substitute your own error handling
...     return _call_func
...
>>> @log_error
... def foo(a):
...     raise AttributeError
...
>>> foo(1)
error
If you use log_error as a decorator on your view, it will automatically handle whatever errors happened within it.
The process_exception function is called for some exceptions (eg: assert(False) in views.py) but process_exception is not getting called for other errors like ImportErrors (eg: import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors?
In Python, all errors are run-time errors. The reason why this is causing problems is because these errors occur immediately when the module is imported before your view is ever called. The first method I posted will catch errors like these for debugging. You might be able to figure something out for production, but I'd argue that you have worse problems if you're getting ImportErrors in a production app (and you're not doing any dynamic importing).
A tool like pylint can help you eliminate these kinds of problems though.
The process_exception function is called for some exceptions (eg: assert(False) in views.py) but process_exception is not getting called for other errors like ImportErrors (eg: import thisclassdoesnotexist in urls.py). I'm new to Django/Python. Is this because of some distinction between run-time and compile-time errors?
No, it's just because process_exception middleware is only called if an exception is raised in the view.
I think DEBUG_PROPAGATE_EXCEPTIONS (as mentioned first by Jason Baker) is what you need here, but I don't think you need to do anything additional (i.e. sys.excepthook, etc.) if you just want the traceback dumped to the console.
If you want to do anything more complex with the error (i.e. dump it to file or DB), the simplest approach would be the got_request_exception signal, which Django sends for any request-related exception, whether it was raised in the view or not.
The get_response and handle_uncaught_exception methods of django.core.handlers.BaseHandler are instructive (and brief) reading in this area.
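A rough sketch of a got_request_exception receiver (the logger name is arbitrary; adapt the handler body to write to a file or the DB as needed):
import logging
import traceback

from django.core.signals import got_request_exception

logger = logging.getLogger("request_errors")

def log_exception(sender, request=None, **kwargs):
    # The signal is sent from inside Django's except block, so the
    # active exception is still available to the traceback module.
    logger.error("Unhandled exception on %s\n%s",
                 getattr(request, "path", "unknown"),
                 traceback.format_exc())

got_request_exception.connect(log_exception)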
without having to add any additional exception handling to the code. I've never seen exception handling around import statements.
Look around a bit more, you'll see it done (often in cases where you want to handle the absence of a dependency in some particular way). That said, it would of course be quite ugly if you had to go sprinkling additional try-except blocks all over your code to make a global change to how exceptions are handled!
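For example, the classic optional-dependency pattern (simplejson here is just an illustration):
try:
    import simplejson as json  # prefer the faster third-party library if present
except ImportError:
    import json                # fall back to the standard library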
First, there are very few compile-time errors that you'll see through an exception log. If your Python code doesn't have valid syntax, it dies long before logs are opened for writing.
In Django runserver mode, a "print" statement writes to stdout, which you can see. This is not a good long-term solution, however, so don't count on it.
When Django is running under Apache, however, it depends on which plug-in you're using. mod_python isn't easy to deal with. mod_wsgi can be coerced into sending stdout and stderr to a log file.
Your best bet, however, is the logging module. Put an initialization into your top-level urls.py to configure logging. (Or, perhaps, your settings.py)
Be sure that every module has a logger available for writing log messages.
Be sure that every web services call you make has a try/except block around it, and you write the exceptions to your log.
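A minimal sketch of that setup (the file name, format, and the illustrative remote call are placeholders):
# settings.py (or top-level urls.py): configure logging once at startup.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    filename="mylog.txt",
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# In each module:
logger = logging.getLogger(__name__)

# Around each web services call:
try:
    result = call_remote_service()          # illustrative placeholder
except Exception:
    logger.exception("remote call failed")  # writes the full traceback to the log
    raise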
http://groups.google.com/group/django-nashville/browse_thread/thread/b4a258250cfa285a?pli=1
If you are on a *nix system, you could write to a log (e.g. mylog.txt) in Python and then run "tail -f mylog.txt" in the console. This is a handy way to view any kind of log in near real time.