django-sentry not recording warnings, errors, etc - django

I just installed django-sentry, and it catches exceptions just fine. However, calls to logger.warning and logger.error are not being saved to Sentry for some reason.
Sentry implementation:
import logging
from sentry.client.handlers import SentryHandler

logger = logging.getLogger()
# ensure we haven't already registered the handler
if SentryHandler not in map(lambda x: x.__class__, logger.handlers):
    logger.addHandler(SentryHandler())
# Add StreamHandler to sentry's default so you can catch missed exceptions
logger = logging.getLogger('sentry.errors')
logger.propagate = False
logger.addHandler(logging.StreamHandler())
logger call:
logger.warning("Category is not an integer", exc_info=sys.exc_info(), extra={'url': request.build_absolute_uri()})
Any ideas?
Thanks!

You're attaching the sentry stream handler to logging.getLogger('sentry.errors'). That means only records logged to sentry.errors (or loggers below it) go through that stream handler.
But records logged to 'my_app.something' never end up there! So you're missing almost all log messages.
Solution: attach the stream handler to the root logger: logging.getLogger('').
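A minimal sketch of that fix (handler and format are illustrative; the point is that records from every child logger propagate up to handlers on the root):

```python
import logging

root = logging.getLogger('')  # the root logger; same as logging.getLogger()
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s %(name)s: %(message)s'))
root.addHandler(handler)
root.setLevel(logging.WARNING)

# Records from any child logger propagate up to the root's handlers:
logging.getLogger('my_app.something').warning('reaches the root handler')
```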

Try adding:
logger.setLevel(logging.DEBUG)
after the logging.getLogger() call; it works for me in my dev environment.
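To see why this matters: a fresh logger inherits its effective level from the root (WARNING by default), so debug/info records are dropped before any handler sees them. A minimal illustration (logger name is arbitrary):

```python
import logging

logger = logging.getLogger('demo')
logger.addHandler(logging.StreamHandler())

logger.debug('dropped')      # below the inherited WARNING default; never reaches the handler
logger.setLevel(logging.DEBUG)
logger.debug('now emitted')  # passes the logger-level check
```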

You can add a FileHandler to the 'sentry.errors' logger and then check whether it recorded any errors in production. That said, stdout should normally be written to the server logs anyway, so you can check those too.
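A sketch of that suggestion (the filename is illustrative; delay=True defers opening the file until the first record is written):

```python
import logging

# Keep a copy of sentry's internal errors in a file you can inspect in production
sentry_errors = logging.getLogger('sentry.errors')
sentry_errors.addHandler(logging.FileHandler('sentry_errors.log', delay=True))
```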

Related

Python logging adds additional handlers

I'm trying to get to grips with Python's logging module which frankly so far has not been approachable. Currently I have one 'main' logger in my main script:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug(
    '{} run for {} using {} values.'.format(
        skill, str(datetime.now()), key, mode
    )
)
and I have a secondary logger in an imported module:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
However, although I tell both loggers to log to a file only (each has only the handlers I've set), I still get information printed to stdout from the root logger. Inspecting logging.root.handlers shows the root logger has a StreamHandler, which only appears after importing the module containing the second logger.
My hack for getting rid of the extra stream is simply to delete the root logger's handlers. However, this feels like a non-canonical solution. I'm assuming I've set up the module incorrectly in some way rather than this being its intended behaviour. How are you meant to set up loggers in this hierarchical fashion correctly?
A proper [mcve] would certainly help here - I can't reproduce this root logger handler suddenly appearing out of the blue.
This being said, you're doing it wrong anyway: one of the main goals of the logging module, which is unfortunately not clearly and explicitly documented, is to separate logger usage (.getLogger() and logger.log() calls) from logging configuration.
The point is that library code cannot know in which context it will be used - only the application can - so library code should NOT try to configure its loggers in any way: just get a logger and use it, period. It's then up to the application (here your main script) to configure the loggers for the libraries (hint: the dictConfig() function is by far the most usable way to configure everything at once).
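A hedged sketch of that dictConfig approach, reusing the file handler and format from the question (the dict keys and filename are illustrative; any logger in the application or its libraries then propagates to the root's file handler):

```python
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': '%(levelname)s: %(asctime)s: %(name)s: %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'debug.log',
            'formatter': 'default',
            'delay': True,  # open the file lazily, on first record
        },
    },
    # Configure the root once; library loggers propagate to it by default
    'root': {'handlers': ['file'], 'level': 'DEBUG'},
}

logging.config.dictConfig(LOGGING)
```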

My Logger object is sending two messages instead of one

I'm trying to create a Logger object which can log info to my console without having the root name.
# Set up logger.
logger = logging.getLogger()
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
logger.addHandler(handler)
logger.info("test")
This returns two log messages: the correct one set up by my handler, plus the one I'd get if I hadn't added a handler at all. What's the issue?
INFO:root:test
INFO:test
After messing around with it, I'm finding that this only occurs if a) I am adding the handler or b) I import another module with a logger.
I think you've missed
logger.setLevel(logging.DEBUG)
before logging; you only set the level on your handler.
Without this, I could not get any output.
And since you got two outputs, maybe some other file you import also configures a logger (e.g. adds a handler to the root logger)?

Using requests library in on_failure or on_success hook causes the task to retry indefinitely

This is what I have:
import youtube_dl # in case this matters
class ErrorCatchingTask(Task):
    # Request = CustomRequest
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # If I comment this out, all is well
        r = requests.post(server + "/error_status/")
        ....

@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retires=1)
def process(self, param_1, param_2, param_3):
    ...
    raise IndexError
    ...
The worker will throw exception and then seemingly spawn a new task with a different task id Received task: process[{task_id}
Here are a couple of things I've tried:
Importing from celery.worker.request import Request and overriding on_failure and on_success functions there instead.
app.conf.broker_transport_options = {'visibility_timeout': 99999999999}
@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retires=1)
Turn off DEBUG mode
Set logging to info
Set CELERY_IGNORE_RESULT to false (Can I use Python requests with celery?)
import requests as apicall to rule out namespace conflict
Monkey patch requests (Celery + Eventlet + non-blocking requests)
Move ErrorCatchingTask into a separate file
If I don't use any of the hook functions, the worker just throws the exception and stays idle until the next task is scheduled, which is what I expect even when I use the hooks. Is this a bug? I searched through GitHub issues but couldn't find the same problem. How do you debug a problem like this?
Django 1.11.16
celery 4.2.1
My problem was resolved after I used grequests
In my case, celery worker would reschedule as soon as conn.urlopen() was being called in requests/adapters.py. Another behavior I observed was if I had another worker from another project open in the same machine, sometimes infinite rescheduling would stop. This probably was some locking mechanism that was originally intended for other purpose kicking in.
So this led me to suspect that this is indeed a threading issue, and after researching whether the requests library is thread safe, I found people suggesting different things. In theory, monkey patching should have a similar effect to using grequests, but it is not the same, so just use the grequests or erequests library instead.
Celery Debugging instruction is here

Using python logging in reusable Django application: "No handlers could be found for logger"

In an open-source Django library I am working on, the logging module is used mainly to notify the user of potential errors:
import logging
logger = logging.getLogger(__name__)
def my_wonderful_function():
    # ... some code ...
    if problem_detected:
        logger.error('Please pay attention, something is wrong...')
While this approach works fine most of the time, Python 2 users who don't configure the logging system for my package get the error:
No handlers could be found for logger "library.module"
as soon as the logger is used.
This error does not print out to Python 3 user, since there is a fallback mechanism to output messages to a default StreamHandler when no handler can be found for a specific logger (See the code).
My question is:
Is there a good way to report errors and warnings to the user, but to print nothing (and in particular no error) when the user doesn't want to configure logging?
I finally found a solution that works well. I added these two functions to my code:
def logger_has_handlers(logger):
    """Since Python 2 doesn't provide Logger.hasHandlers(), we have to
    perform the lookup ourselves."""
    if six.PY3:
        return logger.hasHandlers()
    else:
        c = logger
        rv = False
        while c:
            if c.handlers:
                rv = True
                break
            if not c.propagate:
                break
            else:
                c = c.parent
        return rv

def get_default_logger(name):
    """Get a logger from the default logging manager. If no handler
    is associated, add a default NullHandler."""
    logger = logging.getLogger(name)
    if not logger_has_handlers(logger):
        # If logging is not configured in the current project, configure
        # this logger to discard all log messages. This will prevent
        # the 'No handlers could be found for logger XXX' error on Python 2,
        # and avoid redirecting errors to the default 'lastResort'
        # StreamHandler on Python 3
        logger.addHandler(logging.NullHandler())
    return logger
Then, when I need a logger, instead of calling logging.getLogger(__name__), I use get_default_logger(__name__). With this solution, the returned logger always contains at least 1 handler.
If the user configured logging themselves, the returned logger contains references to the defined handlers. If the user didn't configure logging for the package (or configured it without any handler), the NullHandler instance attached to the logger ensures no error is printed when it is used.
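For comparison, the standard library's recommendation for libraries is the simpler unconditional form sketched below ('mylib' stands in for your package's __name__). A NullHandler counts as a handler during lookup, so it suppresses both the Python 2 warning and Python 3's lastResort output, while records still propagate to any handlers the application configures:

```python
import logging

# Library module: attach a NullHandler so users who never configure
# logging see no warning and no fallback output
logging.getLogger('mylib').addHandler(logging.NullHandler())

# Safe to use even with no application-side configuration:
logging.getLogger('mylib').error('problem detected')  # silently discarded
```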

Why do I get a missing handler for logger "sentry.errors"?

I installed django-sentry in an integrated fashion.
I then ran python manage.py shell and tried to log like this:
>>> import logging
>>> mylog = logging.getLogger('sentrylogger')
>>> mylog.handlers
[<logging.StreamHandler instance at 0x9f6130c>,
 <sentry.client.handlers.SentryHandler instance at 0x9ffed4c>]
>>> mylog.debug('this is a test 1')
DEBUG 2011-09-28 11:10:33,178 <ipython console> 4607 -1217300800 this is a test 1
No handlers could be found for logger "sentry.errors"
Currently, nothing is written to Sentry. I believe the missing 'sentry.errors' logger handler is the root cause of my inability to log to Sentry. Am I on the right track?
Yes, there's a handler missing. I cannot explain why logging to one logger should affect another, but maybe if you add this before calling mylog.debug(...) it will work:
sentry_errors_log = logging.getLogger("sentry.errors")
sentry_errors_log.addHandler(logging.StreamHandler())
Furthermore, refer to the documentation about older versions, which seem to require adding the sentry.errors log handler manually:
http://readthedocs.org/docs/sentry/en/latest/config/index.html#older-versions
If you are running Sentry on your own domain with HTTPS, there is a bug in Sentry's SNI support; see https://github.com/getsentry/raven-python/issues/523 for more details. A quick workaround is to prefix the DSN scheme with threaded+requests+https:
RAVEN_CONFIG = {
'dsn': 'threaded+requests+https://xxxxxxxxxxxx@sentry.example.com/1',
}