ansible.cfg file hijacks my django project's log file output path - django

I have a strange django log file output problem. I use ansible 2.5.0 as a module in my django 1.11.11 project (from ansible.plugins.callback import CallbackBase), and the log_path setting in the /etc/ansible/ansible.cfg file takes effect for my django project's log file output, like a hijack:
# /etc/ansible/ansible.cfg file
# logging is off by default unless this path is defined
# if so defined, consider logrotate
log_path = /var/log/ansible.log
All my django logs go to /var/log/ansible.log, which is quite odd:
# /var/log/ansible.log
2019-01-07 17:49:22,271 django.server "GET /docs/ HTTP/1.1" 200 1391207
2019-01-07 17:49:23,262 django.server "GET /docs/schema.js HTTP/1.1" 200 111440
I did set up LOGGING in my django settings, and that setting takes effect too; its output looks like this:
# /var/log/django_debug.log
"GET /docs/ HTTP/1.1" 200 1391207
"GET /docs/schema.js HTTP/1.1" 200 111440
So there are two log files for the same django project, at the same log level I defined in the django settings:
# django settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django_debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}

Did you import Ansible as a Python module by any chance?
TLDR: When you import Ansible as a module, the logger instance inside the Ansible lib gets instantiated first and you end up using Ansible's logger, unless you set disable_existing_loggers to True.
Why this happens?
Python logging is effectively a singleton: within a process, logging.getLogger(name) always returns the same logger object for a given name.
Libraries (such as Ansible) can also use loggers internally. In my case, Ansible's logger was instantiated somewhere in Ansible's global scope: https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/display.py#L65
When you import the library, its logger is already instantiated, so your own logger will inherit the existing log handler and you'll end up writing logs to both files.
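A quick way to see this behavior for yourself (a minimal sketch, independent of Ansible):
import logging

a = logging.getLogger('ansible')
b = logging.getLogger('ansible')
assert a is b  # getLogger() returns the same object for the same name, per process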
What to do to prevent this?
Please try switching the disable_existing_loggers setting to True.
Django docs for logging: https://docs.djangoproject.com/en/2.1/topics/logging/#configuring-logging
Analogous setting for the Python logger itself: https://docs.python.org/3/library/logging.config.html?highlight=disable_existing_loggers#logging.config.fileConfig
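Applied to the settings from the question, that means flipping a single key (a sketch; the rest of the LOGGING dict stays as in the question):
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,  # drop loggers configured before this point, e.g. Ansible's
    # ... handlers and loggers unchanged ...
}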
One note for my case: I used logging.handlers.WatchedFileHandler, which seems to inherit previous loggers by default.
Please give feedback if this fixed your problem!

Related

Custom Error Reporting Capability in Django

I have set up the ADMINS variable in settings.py. I've configured my email and everything works as intended. I get a very good, detailed error message from my production box when it encounters an error. I also understand the MANAGERS variable and how it can handle 404 errors and send alerts. OK, fine...
However, I have another use case where I'd like to send the error report (the standard detailed 500 report) to a set of individuals based on the app the error occurs in (or a default if outside a specific app). So, let's say I have one person developing/supporting webapp1 and two supporting webapp2. If the error occurs in webapp2, I only want to send to those two, and if in webapp1, to the person developing/supporting that app. If the error doesn't occur in a specific app, I'd send to everyone defined in the ADMINS variable.
First off, is there a better way to address this use-case than through custom error process?
and..
Secondly, is this possible within the custom error capability?
Django uses Python's standard logging module, which should implicitly answer how customized logging behavior is implemented. Python has excellent logging documentation, cookbooks, and how-tos.
Q. How do you send application-specific logs only to their maintainers?
The example below won't go into the specifics of Loggers, LogRecords, Formatters, Handlers, or Django's logging behavior.
LogRecords are propagated to their parent loggers. The developer (i.e. __author__) maintaining __package__ needs to write a project-level logger in the settings.py file. Assume tutorial is the package the developer is maintaining:
.
├── manage.py
├── requirements.txt
└── tutorial
├── asgi.py
├── __init__.py
├── settings.py
├── urls.py
└── wsgi.py
1 directory, 7 files
Focusing on the handlers, django's loggers, and filters, for brevity. In settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        # An alternate filter lives inside your package, where the custom
        # behavior is implemented. A key/value pair dictionary is given;
        # that is how you pass extra parameters if the filter class needs
        # them. The LogRecords propagate up to their parent loggers, so
        # your project root is where you handle them:
        #
        #   project/
        #       __init__.py
        #       logging.py
        #           class NinjaFilter:
        #               def filter(self, record):
        #                   raise NotImplementedError
        #
        'Ninja': {
            '()': 'project.logging.NinjaFilter',
            'key': 'value',
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',  # ERROR (5XX) or WARNING (4XX), your wish
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['Ninja'],
        },
    },
    'loggers': {
        # Django introduces logger classes of its own, not in the standard library:
        # https://docs.djangoproject.com/en/4.0/ref/logging/#default-logging-configuration
        'YourCustomLogger': {
            # NinjaProjectMaintainerAdmins is a custom handler; see the note below
            'handlers': ['mail_admins', 'NinjaProjectMaintainerAdmins'],
            'level': 'INFO',  # again, ERROR (5XX) or WARNING (4XX), your wish
            'filters': ['Ninja'],
        },
    },
}
YourCustomLogger passes your special Ninja-filtered LogRecords to the handlers mail_admins and the custom NinjaProjectMaintainerAdmins. Note that you might not need the custom handler, NinjaProjectMaintainerAdmins, if the builtin mail_admins does the job.
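For illustration, a minimal per-app filter along these lines could look like the sketch below; the class name, module path, and app name are hypothetical, not part of Django:
# project/logging.py -- hypothetical per-app filter sketch
import logging

class AppNameFilter(logging.Filter):
    """Pass only LogRecords whose logger name falls under the given app."""

    def __init__(self, app_name=''):
        super().__init__()
        self.app_name = app_name

    def filter(self, record):
        # Logger names are dotted paths (e.g. 'webapp2.views'), so a
        # prefix check scopes this filter to a single app's records.
        return record.name == self.app_name or record.name.startswith(self.app_name + '.')

Wired into the 'filters' section as {'()': 'project.logging.AppNameFilter', 'app_name': 'webapp2'}, the mail handler then fires only for that app's records.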
I'm working on an open-source async logger, which makes logging easier. I could use feature requests and suggestions. Have a good day. Hope the answer helped!

App Engine stackdriver logging to Global log instead of service log

I'm trying to set up logging for a django app hosted as an App Engine service on GAE.
I have set up the logging successfully, except that the logging shows up in the global log for the entire project instead of the log for that service. I would like the logs to show up only in the specific service's logs.
This is my django logging config:
from google.cloud import logging as google_cloud_logging

log_client = google_cloud_logging.Client()
log_client.setup_logging()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
        },
    },
    'loggers': {
        '': {
            'handlers': ['stackdriver_logging'],
            'level': 'INFO',
        },
    },
}
And I am able to successfully log to the global project log by calling like this:
import logging
from django.shortcuts import redirect

logger = logging.getLogger(__name__)

def fetch_orders(request):
    logger.error('test error')
    logger.critical('test critical')
    logger.warning('test warning')
    logger.info('test info')
    return redirect('dashboard')
I'm wanting to figure out if I can configure the logger to always use the log for the service that it's running in.
EDIT:
I tried the suggestion below, however now it is returning the following error:
Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/google/cloud/logging/handlers/transports/background_thread.py", line 122, in _safely_commit_batch
    batch.commit()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in commit
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in <listcomp>
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 318, in to_api_repr
    info = super(StructEntry, self).to_api_repr()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 241, in to_api_repr
    info["resource"] = self.resource._to_dict()
AttributeError: 'ConvertingDict' object has no attribute '_to_dict'
I can override this in the package source code to make it work, but the GAE environment requires that I use the package as supplied by Google for cloud logging. Is there any way to go from here?
To my understanding, it should be possible to accomplish what you want using the resource option of CloudLoggingHandler. In the Stackdriver Logging (and Stackdriver Monitoring) API, each object (log line, time-series point) is associated with a "resource": some thing that exists in a project, can be provisioned, and can be the source of logs or time-series, or the thing that the logs or time-series are written about. When the resource option is omitted, CloudLoggingHandler defaults to global, as you have observed.
There are a number of monitored resource types, including gae_app, which can be used to represent a specific version of a particular service that is deployed on GAE. Based on your code, this would look something like:
from google.cloud.logging import resource

def get_monitored_resource():
    project_id = get_project_id()
    gae_service = get_gae_service()
    gae_service_version = get_gae_service_version()
    resource_type = 'gae_app'
    resource_labels = {
        'project_id': project_id,
        'module_id': gae_service,
        'version_id': gae_service_version,
    }
    return resource.Resource(resource_type, resource_labels)

GAE_APP_RESOURCE = get_monitored_resource()

LOGGING = {
    # ...
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
            'resource': GAE_APP_RESOURCE,
        },
    },
    # ...
}
In the code above, the functions get_project_id, get_gae_service, and get_gae_service_version can be implemented in terms of the environment variables GOOGLE_CLOUD_PROJECT, GAE_SERVICE, and GAE_VERSION, which are available in the Python flexible environment as documented in The Flexible Python Runtime:
import os

def get_project_id():
    return os.getenv('GOOGLE_CLOUD_PROJECT')
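The other two getters would follow the same pattern (a sketch based on the environment variables named above):
def get_gae_service():
    return os.getenv('GAE_SERVICE')

def get_gae_service_version():
    return os.getenv('GAE_VERSION')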

concise way to change django console log level

I just want to be able to override the console log level in the settings file. I read the django logging documentation, but I'm having trouble making the logging do what I want. The documentation assures me that:
"From Django 1.5 forward, the project’s logging configuration is merged with Django’s defaults, hence you can decide if you want to add to, or replace the existing configuration. To completely override the default configuration, set the disable_existing_loggers key to True in the LOGGING dictConfig. Alternatively you can redefine some or all of the loggers."
So I tried just adding the following to my settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
        },
    },
}
...but I get an exception:
<snip>
  File "/usr/lib/python2.7/logging/config.py", line 575, in configure
    '%r: %s' % (name, e))
ValueError: Unable to configure handler 'console': 'NoneType' object has no attribute 'split'
Fair enough. It seems to want the whole configuration block. So I tried what I thought would be the simplest console logger config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'default': {
            'handlers': ['console'],
            'level': 'INFO',
            'filters': []
        }
    }
}
My intention is to set the log-level to INFO, but I still see a bunch of DEBUG messages, and the string MYFORMATTER doesn't appear in any of them anyway.
Finally, with blind optimism, I attempted this:
from django.utils.log import DEFAULT_LOGGING
DEFAULT_LOGGING['handlers']['console']['level'] = 'INFO'
I must be missing something quite obvious here.
BTW, I'm using Django 1.5.1.
Answering my own question here; I ended up going with the following in the settings.py file:
import logging
logging.basicConfig(
    level=logging.INFO,
    format=" %(levelname)s %(name)s: %(message)s",
)
You can do this to set the django logging level:
import logging
logging.disable(logging.INFO)
logging.disable(lvl)
Provides an overriding level lvl for all loggers which takes precedence over the logger's own level. When the need arises to temporarily throttle logging output down across the whole application, this function can be useful. Its effect is to disable all logging calls of severity lvl and below, so that if you call it with a value of INFO, then all INFO and DEBUG events would be discarded, whereas those of severity WARNING and above would be processed according to the logger's effective level. To undo the effect of a call to logging.disable(lvl), call logging.disable(logging.NOTSET).
http://docs.python.org/2/library/logging.html#logging.disable
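So the throttling is reversible; for example:
import logging

logging.disable(logging.INFO)    # discard INFO and DEBUG events everywhere
# ... run the noisy section ...
logging.disable(logging.NOTSET)  # restore normal per-logger level handling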
I had a similar issue: I was missing a "class" in one of my handlers.
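That matches the first traceback above: the 'NoneType' error goes away once the console handler names its class. A minimal sketch of the corrected first attempt:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',  # the key the first attempt was missing
        },
    },
}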

Simple Log to File example for django 1.3+

The release notes say:
Django 1.3 adds framework-level
support for Python’s logging module.
That's nice. I'd like to take advantage of that. Unfortunately the documentation doesn't hand it all to me on a silver platter in the form of complete working example code which demonstrates how simple and valuable this is.
How do I set up this funky new feature such that I can pepper my code with
logging.debug('really awesome stuff dude: %s' % somevar)
and see the file "/tmp/application.log" fill up with
18:31:59 Apr 21 2011 awesome stuff dude: foobar
18:32:00 Apr 21 2011 awesome stuff dude: foobar
18:32:01 Apr 21 2011 awesome stuff dude: foobar
What's the difference between the default Python logging and this 'framework-level support'?
I truly love this, so here is your working example! Seriously, this is awesome!
Start by putting this in your settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': SITE_ROOT + "/logfile",
            'maxBytes': 50000,
            'backupCount': 2,
            'formatter': 'standard',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
            'level': 'WARN',
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'MYAPP': {
            'handlers': ['console', 'logfile'],
            'level': 'DEBUG',
        },
    }
}
Now what does all of this mean?
Formatters - I like it to come out in the same style as ./manage.py runserver.
Handlers - I want two logs: a debug text file and an info console. This allows me to really dig in (if needed) and look at a text file to see what happens under the hood.
Loggers - Here is where we nail down what we want to log. In general django gets WARN and above; the exception (hence propagate) is the backends, where I love to see the SQL calls since they can get crazy. Last is my app, where I have two handlers and push everything to it.
Now how do I enable MYAPP to use it...
Per the documentation, put this at the top of your files (views.py):
import logging
log = logging.getLogger(__name__)
Then to get something out, do this:
log.debug("Hey there it works!!")
log.info("Hey there it works!!")
log.warn("Hey there it works!!")
log.error("Hey there it works!!")
Log levels are explained here and for pure python here.
Based partially on the logging config suggested by rh0dium and some more research I did myself, I started assembling an example Django project with nice logging defaults – fail-nicely-django.
Sample logfile output:
2016-04-05 22:12:32,984 [Thread-1 ] [INFO ] [djangoproject.logger] This is a manually logged INFO string.
2016-04-05 22:12:32,984 [Thread-1 ] [DEBUG] [djangoproject.logger] This is a manually logged DEBUG string.
2016-04-05 22:12:32,984 [Thread-1 ] [ERROR] [django.request ] Internal Server Error: /
Traceback (most recent call last):
  File "/Users/kermit/.virtualenvs/fail-nicely-django/lib/python3.5/site-packages/django/core/handlers/base.py", line 149, in get_response
    response = self.process_exception_by_middleware(e, request)
  File "/Users/kermit/.virtualenvs/fail-nicely-django/lib/python3.5/site-packages/django/core/handlers/base.py", line 147, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/Users/kermit/projekti/git/fail-nicely-django/djangoproject/brokenapp/views.py", line 12, in brokenview
    raise Exception('This is an exception raised in a view.')
Exception: This is an exception raised in a view.
The detailed usage is explained in the README, but essentially, you copy the logger module to your Django project and add from .logger import LOGGING at the bottom of your settings.py.

How to view corresponding SQL query of the Django ORM's queryset?

Is there a way I can print the query the Django ORM is generating?
Say I execute the following statement: Model.objects.filter(name='test')
How do I get to see the generated SQL query?
Each QuerySet object has a query attribute that you can log or print to stdout for debugging purposes.
qs = Model.objects.filter(name='test')
print(qs.query)
Note that in pdb, using p qs.query will not work as desired, but print(qs.query) will.
If that doesn't work, for old Django versions, try:
print str(qs.query)
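If you want the SQL easier to read, one option (assuming the third-party sqlparse package is installed) is to pretty-print it:
import sqlparse

qs = Model.objects.filter(name='test')
print(sqlparse.format(str(qs.query), reindent=True, keyword_case='upper'))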
Edit
I've also used custom template tags (as outlined in this snippet) to inject the queries in the scope of a single request as HTML comments.
You can also use Python logging to log all queries generated by Django. Just add this to your settings file:
LOGGING = {
    'disable_existing_loggers': False,
    'version': 1,
    'handlers': {
        'console': {
            # logging handler that outputs log messages to terminal
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',  # message level to be written to console
        },
    },
    'loggers': {
        '': {
            # this sets the root logger to log debug and higher level
            # logs to console. All other loggers inherit settings from
            # the root logger.
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,  # this tells the logger whether to send messages
                                 # to its parent (will send if set to True)
        },
        'django.db': {
            # django also has database-level logging
            'level': 'DEBUG',
        },
    },
}
Another method, in case the application is generating HTML output: the django debug toolbar can be used.
You can paste this code on your Django shell which will display all the SQL queries:
import logging
l = logging.getLogger('django.db.backends')
l.setLevel(logging.DEBUG)
l.addHandler(logging.StreamHandler())
As long as DEBUG is on:
from django.db import connection
print(connection.queries)
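Each entry in connection.queries is a dict with the raw SQL and its duration, so you can iterate over it:
from django.db import connection

for q in connection.queries:
    print(q['time'], q['sql'])  # duration in seconds, then the SQL text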
For an individual query, you can do:
print(Model.objects.filter(name='test').query)
Maybe you should take a look at the django-debug-toolbar application; it will log all queries for you, display profiling information for them, and much more.
A robust solution would be to have your database server log to a file and then
tail -f /path/to/the/log/file.log
If you are using database routing, you probably have more than one database connection.
Code like this lets you see connections in a session.
You can reset the stats the same way as with a single connection: reset_queries()
from django.db import connections, connection, reset_queries
...
reset_queries()  # resets data collection, call whenever it makes sense
...
def query_all():
    for c in connections.all():
        print(f"Queries per connection: Database: {c.settings_dict['NAME']} {c.queries}")

# and if you just want to count the number of queries
def query_count_all() -> int:
    return sum(len(c.queries) for c in connections.all())
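Usage could look like this (Model stands in for one of your models; note that connection.queries is only populated while DEBUG is True):
reset_queries()
list(Model.objects.filter(name='test'))  # force the queryset to evaluate
print(query_count_all())                 # number of statements issued since reset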
You can use the django debug toolbar to view the SQL query.
Step-by-step guide for debug toolbar usage:
Install the debug toolbar:
pip install django-debug-toolbar
Edit the settings.py file and add debug_toolbar to the installed apps (below 'django.contrib.staticfiles'); also add the debug toolbar middleware:
INSTALLED_APPS += ['debug_toolbar']
MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']
Create a new list named INTERNAL_IPS at the end of the settings.py file:
INTERNAL_IPS = ['127.0.0.1']
This allows the debug toolbar to run only on the internal development server.
Edit the project's urls.py file and add the code below:
from django.conf import settings
from django.conf.urls import include, url

if settings.DEBUG:
    import debug_toolbar
    urlpatterns = [
        url(r'^__debug__/', include(debug_toolbar.urls)),
    ] + urlpatterns
Apply migrations and run the server again.
You will see an add-on on your web page at 127.0.0.1, and if you click on the SQL Query checkbox, you can actually see the run time of queries as well.