I'm trying to create a Logger object which logs info to my console without including the root name.
# Set up logger.
import logging

logger = logging.getLogger()
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
logger.addHandler(handler)
logger.info("test")
This returns two log messages: the correct one set up by my handler, and the one I would have gotten if I hadn't added a handler at all. What's the issue?
INFO:root:test
INFO:test
After messing around with it, I'm finding that this only occurs if a) I add the handler, or b) I import another module that has a logger.
I think you've missed
logger.setLevel(logging.DEBUG)
before logging; you only set the level on your handler.
Without this line, I could not get any output at all.
And since you got two outputs, maybe you have other modules that also configure a logger?
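A minimal sketch of that fix, assuming the same root-logger setup as in the question: the level has to be set on the logger itself, not only on the handler, because the root logger's default level is WARNING and filters out INFO records before any handler sees them.

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # the logger's own level must also allow INFO

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
logger.addHandler(handler)

logger.info("test")

# If a duplicate line still appears, some imported module has probably
# attached its own handler to the root logger; inspect logging.root.handlers
# to find it.
print(logging.root.handlers)
```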
I would like to add some extra data to my log output depending on where it was logged. For instance I might have System A and System B, both spitting out log data, but perhaps I'm only interested in System B's log lines now so I could filter based on the tag.
In a previous system I had a log function that looked like LOG(level, tag, message) but I could also see a solution that could involve instantiating a logger with a tag for each system that would pipe to a default logger that catches all messages. So spdlog::tagged_logger systemALogger("System A");
This is almost the answer since loggers can have names, I could use the names as a tag, but can loggers redirect to a default logger? The default logger has several sinks that I would have to attach to the named logging solution.
So the final question would be, is there a way to add a custom tag to log messages in spdlog?
Once you have your log configured, you can clone() it and pass a custom logger name that will appear in the log output (provided you have not changed the formatting).
auto logger = spdlog::default_logger()->clone("my_logger");
logger->info("message from custom logger");
spdlog::info("message from default logger");
The contents of basic-log.txt are as follows:
[2020-12-02 09:35:58.801] [my_logger] [info] message from custom logger
[2020-12-02 09:35:58.802] [info] message from default logger
I'm trying to get to grips with Python's logging module, which frankly has not been approachable so far. Currently I have one 'main' logger in my main script:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug(
    '{} run at {} using {} {} values.'.format(
        skill, datetime.now(), key, mode
    )
)
and I have a secondary logger in an imported module:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
However, although I tell both loggers to log to a file only (both have only the handlers I've set), I still get output printed to stdout from the root logger. Calling logging.root.handlers shows that the root logger has a StreamHandler, which only appears when I import the module containing the second logger.
My hackish way of solving this is to just delete the StreamHandler from the root logger's handlers. However, this feels like a non-canonical solution. I'm assuming I've set something up wrong rather than this being the intended behaviour of the module. How are you meant to set up loggers in this hierarchical fashion correctly?
A proper [mcve] would certainly help here - I can't reproduce this root logger handler suddenly appearing out of the blue.
That said, you're doing it wrong anyway: one of the main goals of the logging module, and one that is not clearly and explicitly documented, is to separate logger usage (.getLogger() and logger.log() calls) from logging configuration.
The point is that library code cannot know in which context it will be used; only the application knows. So library code should NOT try to configure its loggers in any way: just get a logger and use it, period. It is then up to the application (here your main.py script) to configure the loggers for the libraries (hint: the logging.config.dictConfig() function is by far the most usable way to configure everything at once).
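A minimal sketch of that split, using a hypothetical library package name mylib: the application calls dictConfig() once up front, and the library only ever calls getLogger() and logs.

```python
# main.py: the application configures logging for itself AND its libraries.
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "%(levelname)s: %(asctime)s: %(name)s: %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "debug.log",
            "formatter": "default",
        },
    },
    "root": {"level": "DEBUG", "handlers": ["file"]},
})

# mylib/module.py: the library never configures anything, it just logs.
import logging

logger = logging.getLogger("mylib.module")
logger.debug("library message")  # routed to debug.log by the app's config
```

Because the handler lives only on the root logger, every logger in the hierarchy (including ones in imported modules) propagates up to it, and there is exactly one place to change when you want different output.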
In an open-source Django library I am working on, the logging module is used mainly to notify the user of potential errors:
import logging
logger = logging.getLogger(__name__)
def my_wonderful_function():
# ... some code ...
if problem_detected:
logger.error('Please pay attention, something is wrong...')
This approach works fine most of the time, but if Python 2 users don't configure the logging system for my package, they get the error:
No handlers could be found for logger "library.module"
as soon as the logger is used.
This error is not printed for Python 3 users, since there is a fallback mechanism that outputs messages to a default StreamHandler when no handler can be found for a specific logger (see the code).
My question is:
Is there a good way to report errors and warnings to the user, but print nothing (and in particular no error) when the user doesn't want to configure logging?
I finally found a solution that works well. I added these two functions to my code:
import logging

import six


def logger_has_handlers(logger):
    """Since Python 2 doesn't provide Logger.hasHandlers(), we have to
    perform the lookup ourselves."""
    if six.PY3:
        return logger.hasHandlers()
    else:
        c = logger
        rv = False
        while c:
            if c.handlers:
                rv = True
                break
            if not c.propagate:
                break
            else:
                c = c.parent
        return rv


def get_default_logger(name):
    """Get a logger from the default logging manager. If no handler
    is associated, add a default NullHandler."""
    logger = logging.getLogger(name)
    if not logger_has_handlers(logger):
        # If logging is not configured in the current project, configure
        # this logger to discard all log messages. This prevents
        # the 'No handlers could be found for logger XXX' error on Python 2,
        # and avoids redirecting errors to the default 'lastResort'
        # StreamHandler on Python 3.
        logger.addHandler(logging.NullHandler())
    return logger
Then, when I need a logger, instead of calling logging.getLogger(__name__), I use get_default_logger(__name__). With this solution, the returned logger always contains at least one handler.
If the user configured logging, the returned logger contains references to the defined handlers. If the user didn't configure logging for the package (or configured it without any handler), the NullHandler instance associated with the logger ensures that no error will be printed when it is used.
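For what it's worth, the pattern recommended in the logging HOWTO is simpler when you can rely on logging.NullHandler being available (Python 2.7+/3.x): attach a NullHandler once at the library's top-level package logger, and use plain getLogger(__name__) everywhere else. Child loggers inherit it via the hierarchy. A sketch, again with a hypothetical package name mylib:

```python
import logging

# mylib/__init__.py: done once for the whole package
logging.getLogger("mylib").addHandler(logging.NullHandler())

# mylib/module.py: child loggers need no handler of their own
logger = logging.getLogger("mylib.module")

def my_wonderful_function(problem_detected=True):
    if problem_detected:
        # Silently discarded if the application never configured logging;
        # handled normally if it did.
        logger.error("Please pay attention, something is wrong...")

my_wonderful_function()  # no "No handlers could be found" warning
```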
I'm creating a Python script and a class object that the script passes logging handlers into. The handlers are created using the logging module like so:
#Debug Handler
debug_handler = logging.FileHandler('Debug.log')
debug_handler.setLevel(logging.DEBUG)
#Info Handler
info_handler = logging.FileHandler('Normal.log')
info_handler.setLevel(logging.INFO)
These handler objects are passed directly into the object initializer:
def __init__(self, type, path, info_handler=False, debug_handler=False):
    # Establishes the class logger
    self.logger = logging.getLogger('LoggerName')
    self.logger.setLevel(logging.DEBUG)
    if info_handler:
        self.logger.addHandler(info_handler)
    if debug_handler:
        self.logger.addHandler(debug_handler)
My goal is to make the handlers completely optional for the class, but to use them I must scatter calls across the code as frequently as print statements, e.g.:
self.logger.info('INITIALIZING RESULTS OBJECT')
which means it will error if no handlers are passed. How can I manage/nullify these statements without placing try/except around every single call in the code?
Update: there should be no issue with calling a logger when no handlers are present; the library simply prints a statement acknowledging the lack of a handler. My error was caused by trying to add a handler that was not defined, which was easily fixed with the checks below:
if info_handler:
    self.logger.addHandler(info_handler)
if debug_handler:
    self.logger.addHandler(debug_handler)
I'll leave this question up to show basic syntax for the logging library.
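One way to make the handlers truly optional without guarding every call: default the parameters to None and fall back to a NullHandler when nothing was passed. A sketch under those assumptions (the Results class name is made up for illustration):

```python
import logging

class Results:
    def __init__(self, info_handler=None, debug_handler=None):
        self.logger = logging.getLogger('LoggerName')
        self.logger.setLevel(logging.DEBUG)
        for handler in (info_handler, debug_handler):
            if handler is not None:
                self.logger.addHandler(handler)
        if not self.logger.handlers:
            # Discard records instead of triggering the Python 2
            # "No handlers could be found" warning.
            self.logger.addHandler(logging.NullHandler())

obj = Results()                                  # no handlers passed...
obj.logger.info('INITIALIZING RESULTS OBJECT')   # ...yet logging calls never error
```

Note that getLogger('LoggerName') always returns the same logger object, so in a real design you may want per-instance logger names to avoid accumulating handlers across instances.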
I just installed django-sentry, and it catches exceptions just fine. However, calls to logger.warning and logger.error are not being saved to Sentry for some reason.
Sentry implementation:
import logging
from sentry.client.handlers import SentryHandler
logger = logging.getLogger()
# ensure we haven't already registered the handler
if SentryHandler not in map(lambda x: x.__class__, logger.handlers):
logger.addHandler(SentryHandler())
# Add StreamHandler to sentry's default so you can catch missed exceptions
logger = logging.getLogger('sentry.errors')
logger.propagate = False
logger.addHandler(logging.StreamHandler())
logger call:
logger.warning("Category is not an integer", exc_info=sys.exc_info(), extra={'url': request.build_absolute_uri()})
Any ideas?
Thanks!
You're attaching the stream handler to logging.getLogger('sentry.errors'). That means only records logged to sentry.errors (or its descendants) will use that handler.
But records logged to 'my_app.something' don't end up there, so you're missing almost all log messages.
Solution: attach the stream handler to the root logger: logging.getLogger('').
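A sketch of that fix. A plain StreamHandler is used here as a stand-in so the snippet is self-contained; in the real setup it would be sentry's SentryHandler.

```python
import logging

# Attach the handler to the root logger instead of 'sentry.errors'.
root = logging.getLogger('')
handler = logging.StreamHandler()   # stand-in for SentryHandler()
root.addHandler(handler)

# Records from any application logger now propagate up to the root's
# handlers, so nothing is missed.
logging.getLogger('my_app.something').warning('Category is not an integer')
```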
Try adding:
logger.setLevel(logging.DEBUG)
after the logging.getLogger() call; it works for me in my dev environment.
You can also add a FileHandler to the 'sentry.errors' logger and then check whether it contains any errors in production. But really, stdout should be written to the server logs anyway, so you can check those.