Python logging adds additional handlers - python-2.7

I'm trying to get to grips with Python's logging module, which frankly has not been very approachable so far. Currently I have one 'main' logger in my main script:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug(
    '{} run for {} using {} values.'.format(
        skill, str(datetime.now()), key, mode
    )
)
and I have a secondary logger in an imported module:
logger = logging.getLogger(__name__)
handler = logging.FileHandler('debug.log')
handler.setFormatter(logging.Formatter('%(levelname)s: %(asctime)s: %(name)s: %(message)s'))
logger.addHandler(handler)
However, although I tell both loggers to log to a file only (each has only the handler I've set), I still get output printed to stdout from the root logger. Inspecting logging.root.handlers shows the root logger has a StreamHandler, which only appears when I import the module containing the second logger.
My hacky fix is simply to delete that handler from the root logger's handlers. However, this feels like a non-canonical solution. I'm assuming I've set things up wrong in some way rather than this being the intended behaviour of the module. How are you meant to set up loggers in this hierarchical fashion correctly?

A proper [mcve] would certainly help here - I can't reproduce this root logger handler suddenly appearing out of the blue.
This being said, you're doing it wrong anyway: one of the main goals of the logging module - one that is not clearly and explicitly documented - is to separate logger usage (the getLogger() and logger.log() calls) from logging configuration.
The point is that library code cannot know in which context it will be used - only the application will - so library code should NOT try to configure its loggers in any way: just get a logger and use it, period. It is then up to the application (here your main.py script) to configure the loggers for the libs (hint: the logging.config.dictConfig() function is by far the most usable way to configure everything at once).
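To make that concrete, here is a minimal sketch of the pattern, assuming a hypothetical library module named mylib with a made-up do_work() function (the formatter string and file name are taken from the question; none of this is code from the original answer):
# mylib.py - library code: just get a logger and use it, no configuration.
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.debug('doing work')

# main.py - the application configures logging for itself and for mylib.
import logging
import logging.config

import mylib

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {'format': '%(levelname)s: %(asctime)s: %(name)s: %(message)s'},
    },
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'debug.log',
            'formatter': 'default',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['file']},
})

logging.getLogger(__name__).debug('application started')
mylib.do_work()  # ends up in debug.log via the root logger, no stdout noise
Because neither module-level logger has its own handler, every record propagates up to the single FileHandler on the root logger, so nothing is duplicated or leaked to stdout.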

Related

My Logger object is sending two messages instead of one

I'm trying to create a Logger object which can log info to my console without having the root name.
# Set up logger.
logger = logging.getLogger()
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
logger.addHandler(handler)
logger.info("test")
This returns two log messages: the one produced by the handler I set up, plus the one I would have got if I hadn't added a handler at all. What's the issue?
INFO:root:test
INFO:test
After messing around with it, I'm finding that this only occurs if a) I am adding the handler or b) I import another module with a logger.
I think you've missed
logger.setLevel(logging.DEBUG)
before doing any logging - you only set the level on your handler.
Without this, I could not get any output.
And since you got two outputs, maybe you have other files that also create or configure a logger?
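For what it's worth, getLogger() with no name returns the root logger itself, so any handler that another import attaches to it (for example via logging.basicConfig()) will emit each record a second time. A minimal sketch of how to check and clean that up, assuming that is indeed what is happening:
import logging

root = logging.getLogger()    # getLogger() with no name IS the root logger
print(root.handlers)          # two or more handlers here means duplicate lines

# Keep exactly one handler on the root logger and set the level on the
# logger itself, not only on the handler.
root.handlers = []            # drop whatever another import may have added
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(message)s"))
root.addHandler(handler)
root.setLevel(logging.DEBUG)

root.info("test")             # printed exactly once: INFO:test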

Using requests library in on_failure or on_success hook causes the task to retry indefinitely

This is what I have:
import requests
import youtube_dl # in case this matters

from celery import Task

class ErrorCatchingTask(Task):
    # Request = CustomRequest
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # If I comment this out, all is well
        r = requests.post(server + "/error_status/")
        ....

@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retires=1)
def process(self, param_1, param_2, param_3):
    ...
    raise IndexError
    ...
The worker will throw the exception and then seemingly spawn a new task with a different task id: Received task: process[{task_id}
Here are a couple of things I've tried:
Importing from celery.worker.request import Request and overriding on_failure and on_success functions there instead.
app.conf.broker_transport_options = {'visibility_timeout': 99999999999}
@app.task(base=ErrorCatchingTask, bind=True, ignore_result=True, max_retires=1)
Turn off DEBUG mode
Set logging to info
Set CELERY_IGNORE_RESULT to false (Can I use Python requests with celery?)
import requests as apicall to rule out namespace conflict
Monkey patch requests (Celery + Eventlet + non blocking requests)
Move ErrorCatchingTask into a separate file
If I don't use any of the hook functions, the worker will just throw the exception and stay idle until the next task is scheduled, which is what I expect even when I use the hooks. Is this a bug? I searched through GitHub issues again and again, but couldn't find the same problem. How do you debug a problem like this?
Django 1.11.16
celery 4.2.1
My problem was resolved after I switched to grequests.
In my case, the celery worker would reschedule as soon as conn.urlopen() was called in requests/adapters.py. Another behaviour I observed was that if I had another worker from another project open on the same machine, the infinite rescheduling would sometimes stop. This was probably some locking mechanism, originally intended for another purpose, kicking in.
So this led me to suspect that it was indeed a threading issue, and after researching whether the requests library is thread-safe, I found people suggesting different things. In theory, monkey patching should have a similar effect to using grequests, but it is not the same, so just use the grequests or erequests library instead.
The Celery debugging instructions are here.
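For illustration only, a hedged sketch of what swapping requests for grequests inside the hook might look like (server is the same undefined variable used in the question; this is not code from the original answer):
import grequests            # gevent-based asynchronous wrapper around requests
from celery import Task

class ErrorCatchingTask(Task):
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Build the request and dispatch it through gevent instead of calling
        # requests.post() directly, which is the call that hung the worker here.
        req = grequests.post(server + "/error_status/")
        grequests.map([req])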

Using python logging in reusable Django application: "No handlers could be found for logger"

In an open-source Django library I am working on, the logging module is used mainly to notify users of potential errors:
import logging

logger = logging.getLogger(__name__)

def my_wonderful_function():
    # ... some code ...
    if problem_detected:
        logger.error('Please pay attention, something is wrong...')
While this approach works fine most of the time, if Python 2 users don't configure the logging system for my package, they get the error:
No handlers could be found for logger "library.module"
as soon as the logger is used.
This error is not printed for Python 3 users, since there is a fallback mechanism that outputs messages to a default StreamHandler when no handler can be found for a specific logger (see the code).
My question is:
Is there a good way to report errors and warnings to the user, but to print nothing (and in particular no error) when the user doesn't want to configure logging?
I finally found a solution that works well. I added these two functions to my code:
import logging

import six


def logger_has_handlers(logger):
    """Since Python 2 doesn't provide Logger.hasHandlers(), we have to
    perform the lookup by ourselves."""
    if six.PY3:
        return logger.hasHandlers()
    else:
        c = logger
        rv = False
        while c:
            if c.handlers:
                rv = True
                break
            if not c.propagate:
                break
            else:
                c = c.parent
        return rv


def get_default_logger(name):
    """Get a logger from the default logging manager. If no handler
    is associated, add a default NullHandler."""
    logger = logging.getLogger(name)
    if not logger_has_handlers(logger):
        # If logging is not configured in the current project, configure
        # this logger to discard all log messages. This will prevent
        # the 'No handlers could be found for logger XXX' error on Python 2,
        # and avoid redirecting errors to the default 'lastResort'
        # StreamHandler on Python 3.
        logger.addHandler(logging.NullHandler())
    return logger
Then, when I need a logger, instead of calling logging.getLogger(__name__), I use get_default_logger(__name__). With this solution, the returned logger always contains at least 1 handler.
If the user configured logging themselves, the returned logger contains references to the defined handlers. If the user didn't configure logging for the package (or configured it without any handler), the NullHandler instance associated with the logger ensures no error will be printed when it is used.
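Applied to the earlier example, the library code then becomes (a sketch combining the two snippets above):
logger = get_default_logger(__name__)

def my_wonderful_function():
    # ... some code ...
    if problem_detected:
        # Discarded silently by the NullHandler when the host project never
        # configured logging; handled by the project's handlers otherwise.
        logger.error('Please pay attention, something is wrong...')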

How do I subclass/override Ember.Logger?

I am implementing remote logging ability in my Ember app, where I want to push everything that gets sent to the log console to a remote logging service (e.g. Loggly).
I believe that what I need to do is override Ember.Logger's methods to redirect log output to the remote logging service, but I can't figure out how to do that.
The documentation for Ember.Logger simply states:
Override this to provide more robust logging functionality.
How do I "override this"? I've tried doing Ember.Logger.reopenClass() and it complains with Ember.Logger.reopenClass is not a function.
Where would I do this? In an initializer? In a service? Other?
Ember.Logger is not an Ember class. It's just an object with some methods on it.
You can override it by something like
Ember.Logger.log = function(...
You can put this wherever you want. I might put it at the top of app.js.
Expanding upon and updating @user663031's response...
As of Nov 2017, the status of Ember.Logger is up in the air: it was not included in Ember's module API, and there isn't yet an RFC for its future.
It is possible to use a debug utility directly, e.g. ember-debug-logger, and extend those prototypes separately from Ember.Logger.
However, I opted to override Ember.Logger directly, because it lets me plug in any logging tool I like (as opposed to the debug util) without having to modify the log statements scattered throughout the code.
Since I use bunyan on the backend, I opted to log with browser-bunyan, which incidentally has the same info, warn and error methods as Ember.Logger.
YMMV, but this is the minimal example that worked for me...
// app/app.js
import Ember from 'ember';
import config from './config/environment';
import LOG from './logger-bunyan';

if (config.APP.LOG_BUNYAN) {
  Ember.Logger = LOG;
}

// app/logger-bunyan.js
import bunyan from 'npm:browser-bunyan';

const LOG = bunyan.createLogger({
  name: 'emberApplication',
});

export default LOG;

// config/environment.js
if (environment === 'development') {
  ENV.APP.LOG_BUNYAN = true;
}

// app/component/WhereIWantToLog.js
Ember.Logger.warn('bunyan logged warning message');

How to enable DEBUG level logging with Jetty embedded?

I'm trying to set the logging level to DEBUG in an embedded Jetty instance.
The documentation at http://docs.codehaus.org/display/JETTY/Debugging says to -
call SystemProperty.set("DEBUG", "true") before calling new
org.mortbay.jetty.Server().
I'm not sure what the SystemProperty class is, it doesn't seem to be documented anywhere. I tried System.setProperty(), but that didn't do the trick.
My question was answered on the Jetty mailing list by Joakim Erdfelt:
You are looking at the old Jetty 6.x docs at docs.codehaus.org.
DEBUG logging is just a logging level determined by the logging
implementation you choose to use.
If you use slf4j, then use slf4j's docs for configuring logging level. http://slf4j.org/manual.html
If you use java.util.logging, use the JVM docs. http://docs.oracle.com/javase/6/docs/technotes/guides/logging/overview.html
If you use the built-in StdErrLog, then there is a pattern to follow.
-D{classref}.LEVEL={level}
Where {classref} is the class reference you want to set the level on (and all of its sub-class refs), and {level} is one of the values ALL, DEBUG, INFO, WARN.
Example:
-Dorg.eclipse.jetty.LEVEL=INFO - this will enable INFO level logging for all Jetty packages / classes.
-Dorg.eclipse.jetty.io.LEVEL=DEBUG - this will enable DEBUG level logging for IO classes only.
-Dorg.eclipse.jetty.servlet.LEVEL=ALL - this will enable ALL logging (trace events, internally ignored exceptions, etc.) for servlet packages.
-Dorg.eclipse.jetty.util.thread.QueuedThreadPool.LEVEL=ALL - this will enable level ALL+ on that specific class only.
Add this
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog
-Dorg.eclipse.jetty.LEVEL=DEBUG
In case you just want to quickly get log messages to stderr, add something like this to the java command line:
-Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog -D{classref}.LEVEL=DEBUG
You can use this snippet to enable logging:
import org.eclipse.jetty.util.log.Log;
import org.eclipse.jetty.util.log.StdErrLog;

// ...

StdErrLog logger = new StdErrLog();
logger.setDebugEnabled(true);
Log.setLog(logger);