Concise way to change the Django console log level

I just want to be able to override the console log level in the settings file. I read the Django logging documentation, but I'm having trouble making logging do what I want. The documentation assures me that:
"From Django 1.5 forward, the project’s logging configuration is merged with Django’s defaults, hence you can decide if you want to add to, or replace the existing configuration. To completely override the default configuration, set the disable_existing_loggers key to True in the LOGGING dictConfig. Alternatively you can redefine some or all of the loggers."
So I tried just adding the following to my settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
        },
    },
}
...but I get an exception:
<snip>
File "/usr/lib/python2.7/logging/config.py", line 575, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'console': 'NoneType' object has no attribute 'split'
Fair enough. It seems to want the whole configuration block. So I tried what I thought would be the simplest console logger config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'simple': {
            'format': 'MYFORMATTER: %(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'default': {
            'handlers': ['console'],
            'level': 'INFO',
            'filters': []
        }
    }
}
My intention is to set the log level to INFO, but I still see a bunch of DEBUG messages, and the string MYFORMATTER doesn't appear in any of them anyway.
Finally, with blind optimism, I attempted this:
from django.utils.log import DEFAULT_LOGGING
DEFAULT_LOGGING['handlers']['console']['level'] = 'INFO'
I must be missing something quite obvious here.
BTW, I'm using Django 1.5.1.

Answering my own question here, I ended up going with the following in the settings.py file:
import logging

logging.basicConfig(
    level=logging.INFO,
    format=" %(levelname)s %(name)s: %(message)s",
)

You can do this to set the Django logging level:
import logging
logging.disable(logging.INFO)
logging.disable(lvl) provides an overriding level lvl for all loggers which takes precedence over the logger's own level. When the need arises to temporarily throttle logging output down across the whole application, this function can be useful. Its effect is to disable all logging calls of severity lvl and below, so that if you call it with a value of INFO, then all INFO and DEBUG events would be discarded, whereas those of severity WARNING and above would be processed according to the logger's effective level. To undo the effect of a call to logging.disable(lvl), call logging.disable(logging.NOTSET).
http://docs.python.org/2/library/logging.html#logging.disable
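A minimal round trip of that behaviour (my sketch):

import logging

logging.basicConfig(level=logging.DEBUG)
logging.disable(logging.INFO)       # INFO and DEBUG are now discarded everywhere
logging.info("suppressed")          # not emitted
logging.warning("still visible")    # emitted
logging.disable(logging.NOTSET)     # undo: back to each logger's effective level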

I had a similar issue: I was missing a 'class' key in one of my handlers.
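For reference, here is what the first attempt from the question looks like with that missing key filled in; a minimal sketch of a complete console handler (dictConfig cannot construct a handler without 'class'):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',  # the key whose absence triggers the ValueError
        },
    },
}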

Related

Logging in Flask with TimedRotatingFileHandler [duplicate]

This question was closed as a duplicate of: Why does running the Flask dev server run itself twice?
I think this problem happens only on Windows, as Windows doesn't let you rename a file while it is open. Not tested on Linux, though.
I've tried to simplify the code to show the issue clearly. The following code creates the log file defined in dictConfig and logs to that file successfully. After a minute (the rotation interval), the code reports an error: PermissionError: [WinError 32] The process cannot access the file because it is being used by another process. It seems the rotate function fails to rename the file because it is still open, so rotation fails. For now, the code logs to one file without rotation, but that will produce a giant log file after a while. Any help resolving it?
Test code:
from loggerConfig import LOGGING
import logging
from logging.config import dictConfig
from flask import Response
import flask

dictConfig(LOGGING)

app = flask.Flask(__name__)
app.config["DEBUG"] = True
amLogger = logging.getLogger('alerting')

@app.route('/', methods=['GET'])
def home():
    amLogger.debug("RX-GET")
    return Response("<h1>Alert listener working.</h1> <p>Alert listener working.</p>", status=200)

app.run(host="0.0.0.0", port=5000, threaded=True)
loggerConfig.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'default': {
            'format': '%(asctime)s — %(name)s — %(levelname)s — %(message)s'
        }
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'formatter': 'default',
            'filename': "path/to/log-file/alertListener.log",
            'when': 'm',
            'interval': 1,
            'delay': True
        }
    },
    'loggers': {
        '': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}
OK, I figured out the issue. Flask runs itself twice to create a child process for its reloader. I added the following argument when invoking flask.run, and the problem went away:
use_reloader=False
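In the test code above, the run call becomes:

app.run(host="0.0.0.0", port=5000, threaded=True, use_reloader=False)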

App Engine stackdriver logging to Global log instead of service log

I'm trying to set up logging for a Django app hosted as an App Engine service on GAE.
I have set up the logging successfully, except that the entries show up in the global log for the entire project instead of in the log for that service. I would like the logs to show up only in that specific service's logs.
this is my django logging config:
from google.cloud import logging as google_cloud_logging

log_client = google_cloud_logging.Client()
log_client.setup_logging()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client
        },
    },
    'loggers': {
        '': {
            'handlers': ['stackdriver_logging'],
            'level': 'INFO',
        }
    },
}
And I am able to successfully log to the global project log by calling it like this:
import logging
from django.shortcuts import redirect

logger = logging.getLogger(__name__)

def fetch_orders(request):
    logger.error('test error')
    logger.critical('test critical')
    logger.warning('test warning')
    logger.info('test info')
    return redirect('dashboard')
I'm wanting to figure out if I can configure the logger to always use the log for the service that it's running in.
EDIT:
I tried the suggestion below; however, it now returns the following error:
Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/google/cloud/logging/handlers/transports/background_thread.py", line 122, in _safely_commit_batch
    batch.commit()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in commit
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in <listcomp>
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 318, in to_api_repr
    info = super(StructEntry, self).to_api_repr()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 241, in to_api_repr
    info["resource"] = self.resource._to_dict()
AttributeError: 'ConvertingDict' object has no attribute '_to_dict'
I can override this in the package source code to make it work, but the GAE environment requires that I use the package as supplied by Google for cloud logging. Is there any way to go from here?
To my understanding, it should be possible to accomplish what you want using the resource option of CloudLoggingHandler. In the Stackdriver Logging (and Stackdriver Monitoring) API, each object (log line, time-series point) is associated with a "resource": something that exists in a project, can be provisioned, and can be the source of logs or time series, or the thing the logs or time series are written about. When the resource option is omitted, CloudLoggingHandler defaults to global, as you have observed.
There are a number of monitored resource types, including gae_app, which can represent a specific version of a particular service deployed on GAE. Based on your code, this would look something like:
from google.cloud.logging import resource

def get_monitored_resource():
    project_id = get_project_id()
    gae_service = get_gae_service()
    gae_service_version = get_gae_service_version()
    resource_type = 'gae_app'
    resource_labels = {
        'project_id': project_id,
        'module_id': gae_service,
        'version_id': gae_service_version
    }
    return resource.Resource(resource_type, resource_labels)
GAE_APP_RESOURCE = get_monitored_resource()

LOGGING = {
    # ...
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
            'resource': GAE_APP_RESOURCE,
        },
    },
    # ...
}
In the code above, the functions get_project_id, get_gae_service, and get_gae_service_version can be implemented in terms of the environment variables GOOGLE_CLOUD_PROJECT, GAE_SERVICE, and GAE_VERSION available in the Python flexible environment, as documented by The Flexible Python Runtime, as in:
import os

def get_project_id():
    return os.getenv('GOOGLE_CLOUD_PROJECT')
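The other two helpers follow the same pattern, reading the environment variables named above:

def get_gae_service():
    return os.getenv('GAE_SERVICE')

def get_gae_service_version():
    return os.getenv('GAE_VERSION')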

ansible.cfg file hijack my django project log file output path

I have a strange Django log file output problem. I use the ansible 2.5.0 module in my Django 1.11.11 project like this: from ansible.plugins.callback import CallbackBase. The log_path setting in the /etc/ansible/ansible.cfg file actually takes effect for my Django project's log file output, like a hijack:
# /etc/ansible/ansible.cfg file
# logging is off by default unless this path is defined
# if so defined, consider logrotate
log_path = /var/log/ansible.log
All my Django logs go to /var/log/ansible.log, which is quite odd:
# /var/log/ansible.log
2019-01-07 17:49:22,271 django.server "GET /docs/ HTTP/1.1" 200 1391207
2019-01-07 17:49:23,262 django.server "GET /docs/schema.js HTTP/1.1" 200 111440
I did set up LOGGING in my Django settings, and that setting takes effect too; its output looks like this:
# /var/log/django_debug.log
"GET /docs/ HTTP/1.1" 200 1391207
"GET /docs/schema.js HTTP/1.1" 200 111440
So there are two log files for the same Django project, at the same log level I defined in the Django settings:
# django settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django_debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
Did you import Ansible as a Python module by any chance?
TL;DR: When you import Ansible as a module, the logger instance inside the Ansible lib gets instantiated first, and you end up using Ansible's logger unless you set disable_existing_loggers to True.
Why does this happen?
Python logging is effectively a singleton: the logging module keeps a single registry of loggers per process, so a logger with a given name is shared by everything running in that process.
Libraries (such as Ansible) can also use loggers internally. In my case, Ansible's logger is instantiated somewhere in Ansible's global scope: https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/display.py#L65
When you import the library, its logger is already instantiated, so your own logger will inherit the existing log handler and you'll end up writing logs to both log files.
What to do to prevent this?
Please try switching the disable_existing_loggers setting to True.
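Applied to the question's settings, that is a one-key change:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,  # disable loggers already created by imported libraries (e.g. Ansible)
    # ... 'handlers' and 'loggers' unchanged ...
}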
Django docs for logging: https://docs.djangoproject.com/en/2.1/topics/logging/#configuring-logging
Analogical settings for Python logger itself: https://docs.python.org/3/library/logging.config.html?highlight=disable_existing_loggers#logging.config.fileConfig
One note for my case: I used logging.handlers.WatchedFileHandler, which seems to inherit previous loggers by default.
Please give feedback if this fixed your problem!

How to configure logging handler to produce monthly rotated log files?

I am about to configure Python logging using the dictConfig format in settings.py, as suggested by Django.
For effectiveness, I would like log entries split into monthly log files, regardless of the number of days in the month (or, in the future, depending on the workload of my project, by ISO week number). Unfortunately, Python's TimedRotatingFileHandler can't do that.
I had the idea to use the standard FileHandler and dynamically change the filename (customizing it btw).
'fichierMensuelCustom': {
    'level': 'INFO',
    'class': 'logging.FileHandler',
    'filename': lambda x: 'logs/projet/projet_{0}.log'.format(time.strftime('%m-%Y')),
    'formatter': 'complet'
},
(please don't laugh at it)
It doesn't work, I'm stuck… Any suggestions?
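One approach (my own sketch, not from the thread; the module name monthly_handler and the MonthlyFileHandler class are hypothetical) is to subclass FileHandler so that a strftime pattern is expanded into the current month's filename when the handler is created, and to plug it in via dictConfig's '()' factory key, which passes the remaining keys as keyword arguments:

# monthly_handler.py (hypothetical module)
import logging
import time

class MonthlyFileHandler(logging.FileHandler):
    # FileHandler whose filename is a strftime pattern, expanded at creation.
    # Caveat: the name is fixed when the handler is built; a long-running
    # process would need extra logic to re-open the file at month boundaries.
    def __init__(self, pattern, **kwargs):
        super().__init__(time.strftime(pattern), **kwargs)

The handler entry then becomes:

'fichierMensuelCustom': {
    'level': 'INFO',
    '()': 'monthly_handler.MonthlyFileHandler',  # dictConfig factory syntax
    'pattern': 'logs/projet/projet_%m-%Y.log',
    'formatter': 'complet'
},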

Simple Log to File example for django 1.3+

The release notes say:
Django 1.3 adds framework-level support for Python's logging module.
That's nice. I'd like to take advantage of that. Unfortunately the documentation doesn't hand it all to me on a silver platter in the form of complete working example code which demonstrates how simple and valuable this is.
How do I set up this funky new feature such that I can pepper my code with
logging.debug('really awesome stuff dude: %s' % somevar)
and see the file "/tmp/application.log" fill up with
18:31:59 Apr 21 2011 awesome stuff dude: foobar
18:32:00 Apr 21 2011 awesome stuff dude: foobar
18:32:01 Apr 21 2011 awesome stuff dude: foobar
What's the difference between the default Python logging and this 'framework-level support'?
I truly love this, so here is your working example! Seriously, this is awesome!
Start by putting this in your settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': SITE_ROOT + "/logfile",
            'maxBytes': 50000,
            'backupCount': 2,
            'formatter': 'standard',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
            'level': 'WARN',
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'MYAPP': {
            'handlers': ['console', 'logfile'],
            'level': 'DEBUG',
        },
    }
}
Now what does all of this mean?
Formatters - I like the output to come out in the same style as ./manage.py runserver.
Handlers - I want two logs: a DEBUG text file and an INFO console. This allows me to really dig in (if needed) and look at a text file to see what happens under the hood.
Loggers - Here is where we nail down what we want to log. In general, Django gets WARN and above; the exception (hence propagate) is django.db.backends, where I love to see the SQL calls since they can get crazy. Last is my app, where I have two handlers and push everything to it.
Now, how do I enable MYAPP to use it?
Per the documentation, put this at the top of your files (views.py); note that __name__ resolves to the module path (e.g. MYAPP.views), so those loggers fall under the 'MYAPP' logger through the dotted-name hierarchy:
import logging
log = logging.getLogger(__name__)
Then, to get something out, do this:
log.debug("Hey there it works!!")
log.info("Hey there it works!!")
log.warn("Hey there it works!!")
log.error("Hey there it works!!")
Log levels are explained in the Django logging documentation and, for pure Python, in the logging module documentation.
Based partially on the logging config suggested by rh0dium and some more research I did myself, I started assembling an example Django project with nice logging defaults – fail-nicely-django.
Sample logfile output:
2016-04-05 22:12:32,984 [Thread-1 ] [INFO ] [djangoproject.logger] This is a manually logged INFO string.
2016-04-05 22:12:32,984 [Thread-1 ] [DEBUG] [djangoproject.logger] This is a manually logged DEBUG string.
2016-04-05 22:12:32,984 [Thread-1 ] [ERROR] [django.request ] Internal Server Error: /
Traceback (most recent call last):
File "/Users/kermit/.virtualenvs/fail-nicely-django/lib/python3.5/site-packages/django/core/handlers/base.py", line 149, in get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/kermit/.virtualenvs/fail-nicely-django/lib/python3.5/site-packages/django/core/handlers/base.py", line 147, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/kermit/projekti/git/fail-nicely-django/djangoproject/brokenapp/views.py", line 12, in brokenview
raise Exception('This is an exception raised in a view.')
Exception: This is an exception raised in a view.
The detailed usage is explained in the README, but essentially, you copy the logger module to your Django project and add from .logger import LOGGING at the bottom of your settings.py.
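That is, after copying the logger module into the project package:

# at the bottom of settings.py
from .logger import LOGGING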