I have set up the ADMINS variable in settings.py. I've configured my email and everything works as intended: I get a very good, detailed error message from my production box when it encounters an error. I also understand the MANAGERS variable and how it can handle 404 errors and send alerts. OK, fine...
However, I have another use case where I'd like to send the error report (the standard detailed 500 report) to a set of individuals based on the app the error occurs in (or a default if it occurs outside a specific app). So, let's say I have one person developing/supporting webapp1 and two supporting webapp2. If the error occurs in webapp2, I only want to send to those two; if in webapp1, only to the person developing/supporting that app. If the error doesn't occur in a specific app, I'd send to everyone defined in the ADMINS variable.
First off, is there a better way to address this use case than through custom error processing?
and..
Secondly, is this possible within the custom error capability?
Django uses Python's standard logging module, which should implicitly answer how customized logging behavior is implemented. Python has excellent logging documentation, cookbooks, and how-tos.
Q. How do you send application-specific logs only to that application's maintainers?
The example below won't go into the specifics of Loggers, LogRecords, Formatters, Handlers, or Django's logging behavior.
LogRecords propagate up to their ancestor loggers, so the developer maintaining a package needs to write a project-level logger in the settings.py file. Assume tutorial is the package the developer is maintaining:
.
├── manage.py
├── requirements.txt
└── tutorial
    ├── asgi.py
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
1 directory, 7 files
For brevity, the example focuses on the handlers, Django's loggers, and filters.
# tutorial/settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        # The custom behavior lives in a filter class inside your own
        # package, for example:
        #
        #   project/
        #   ├── __init__.py
        #   └── logging.py    # defines class NinjaFilter
        #
        # LogRecords propagate up to their ancestor loggers, so your
        # project root is where you handle them. The '()' key names the
        # callable to instantiate; any extra key/value pairs are passed
        # to it as keyword arguments if the filter class needs them.
        'Ninja': {
            '()': 'project.logging.NinjaFilter',
            'key': 'value',
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',  # Django logs 5XX as ERROR and 4XX as WARNING
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['Ninja'],
        },
    },
    'loggers': {
        # Django also ships its own loggers on top of the standard library:
        # https://docs.djangoproject.com/en/4.0/ref/logging/#default-logging-configuration
        'YourCustomLogger': {
            # Add a 'NinjaProjectMaintainerAdmins' handler here once you
            # define one; the builtin mail_admins may already do the job.
            'handlers': ['mail_admins'],
            'level': 'INFO',
            'filters': ['Ninja'],
        },
    },
}
YourCustomLogger passes your Ninja-filtered LogRecords to the mail_admins handler and, if you define one, to a custom NinjaProjectMaintainerAdmins handler. Note that you might not need the custom handler if the builtin mail_admins does the job.
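The custom handler is where per-app routing would live. Below is a minimal sketch of what a NinjaProjectMaintainerAdmins handler could look like, subclassing the builtin AdminEmailHandler and overriding send_mail(), which is where the recipient list is chosen. The APP_MAINTAINERS setting and the URL-prefix-to-app mapping are illustrative assumptions, not Django APIs.
# project/logging.py -- a sketch, not production code
from django.conf import settings
from django.core import mail
from django.utils.log import AdminEmailHandler


class NinjaProjectMaintainerAdminsHandler(AdminEmailHandler):
    """Email error reports to per-app maintainers instead of ADMINS."""

    def emit(self, record):
        # Django's 500 reports arrive via the 'django.request' logger with
        # the request attached, so derive the app from the URL prefix
        # ('/webapp2/...' -> 'webapp2'); fall back to the logger name.
        request = getattr(record, 'request', None)
        if request is not None and request.path.strip('/'):
            self._app = request.path.strip('/').split('/')[0]
        else:
            self._app = record.name.split('.')[0]
        super().emit(record)

    def send_mail(self, subject, message, *args, **kwargs):
        # APP_MAINTAINERS is a hypothetical setting, e.g.
        # {'webapp1': ['dev1@example.com'],
        #  'webapp2': ['dev2@example.com', 'dev3@example.com']}
        # Everyone in ADMINS gets errors outside a known app.
        maintainers = getattr(settings, 'APP_MAINTAINERS', {})
        recipients = maintainers.get(
            self._app, [email for name, email in settings.ADMINS])
        mail.send_mail(subject, message, settings.SERVER_EMAIL, recipients,
                       connection=self.connection(), **kwargs)
Register it in LOGGING under 'handlers' with 'class': 'project.logging.NinjaProjectMaintainerAdminsHandler' and attach it to the 'django.request' logger (or your own loggers) alongside or instead of mail_admins.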
I'm working on an open-source async logger that aims to make logging easier; I could use feature requests and suggestions. Have a good day. I hope the answer helped.
Related
I'm trying to set up logging for a django app hosted as an App Engine service on GAE.
I have set up the logging successfully, except that the logs are showing up in the global log for the entire project instead of the log for that service. I would like the logs to show up only in the specific service's logs.
This is my Django logging config:
from google.cloud import logging as google_cloud_logging
log_client = google_cloud_logging.Client()
log_client.setup_logging()
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
        },
    },
    'loggers': {
        '': {
            'handlers': ['stackdriver_logging'],
            'level': 'INFO',
        },
    },
}
And I am able to successfully log to the global project log by calling it like this:
import logging

from django.shortcuts import redirect

logger = logging.getLogger(__name__)

def fetch_orders(request):
    logger.error('test error')
    logger.critical('test critical')
    logger.warning('test warning')
    logger.info('test info')
    return redirect('dashboard')
I'm wanting to figure out if I can configure the logger to always use the log for the service that it's running in.
EDIT:
I tried the suggestion below; however, it now returns the following error:
Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/google/cloud/logging/handlers/transports/background_thread.py", line 122, in _safely_commit_batch
    batch.commit()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in commit
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/logger.py", line 381, in <listcomp>
    entries = [entry.to_api_repr() for entry in self.entries]
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 318, in to_api_repr
    info = super(StructEntry, self).to_api_repr()
  File "/env/lib/python3.7/site-packages/google/cloud/logging/entries.py", line 241, in to_api_repr
    info["resource"] = self.resource._to_dict()
AttributeError: 'ConvertingDict' object has no attribute '_to_dict'
I can override this in the package source code to make it work; however, the GAE environment requires that I use the package as supplied by Google for cloud logging. Is there any way to go from here?
To my understanding, it should be possible to accomplish what you want using the resource option of CloudLoggingHandler. In the Stackdriver Logging (and Stackdriver Monitoring) API, each object (log line, time-series point) is associated with a "resource" (some thing that exists in a project, that can be provisioned, and can be the source of logs or time-series or the thing that the logs or time-series are being written about). When the resource option is omitted, the CloudLoggingHandler defaults to global as you have observed.
There are a number of monitored resource types, including gae_app, which can be used to represent a specific version of a particular service that is deployed on GAE. Based on your code, this would look something like:
from google.cloud.logging import resource

def get_monitored_resource():
    project_id = get_project_id()
    gae_service = get_gae_service()
    gae_service_version = get_gae_service_version()
    resource_type = 'gae_app'
    resource_labels = {
        'project_id': project_id,
        'module_id': gae_service,
        'version_id': gae_service_version,
    }
    return resource.Resource(resource_type, resource_labels)

GAE_APP_RESOURCE = get_monitored_resource()
LOGGING = {
    # ...
    'handlers': {
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
            'resource': GAE_APP_RESOURCE,
        },
    },
    # ...
}
In the code above, the functions get_project_id, get_gae_service, and get_gae_service_version can be implemented in terms of the environment variables GOOGLE_CLOUD_PROJECT, GAE_SERVICE, and GAE_VERSION available in the Python flexible environment, as documented in The Flexible Python Runtime, as in:
import os

def get_project_id():
    return os.getenv('GOOGLE_CLOUD_PROJECT')
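The other two helpers are analogous; a sketch assuming the flexible environment, where GAE_SERVICE and GAE_VERSION are set:
def get_gae_service():
    return os.getenv('GAE_SERVICE')

def get_gae_service_version():
    return os.getenv('GAE_VERSION')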
I have a strange Django log file output problem. I use the Ansible 2.5.0 module in my Django 1.11.11 project (imported like this: from ansible.plugins.callback import CallbackBase), and the log_path setting in the /etc/ansible/ansible.cfg file actually takes effect for my Django project's log file output, like a hijack:
# /etc/ansible/ansible.cfg file
# logging is off by default unless this path is defined
# if so defined, consider logrotate
log_path = /var/log/ansible.log
All my Django log output goes to /var/log/ansible.log, which is quite odd:
# /var/log/ansible.log
2019-01-07 17:49:22,271 django.server "GET /docs/ HTTP/1.1" 200 1391207
2019-01-07 17:49:23,262 django.server "GET /docs/schema.js HTTP/1.1" 200 111440
I did set up LOGGING in my Django settings, and that setting takes effect too; its output looks like this:
# /var/log/django_debug.log
"GET /docs/ HTTP/1.1" 200 1391207
"GET /docs/schema.js HTTP/1.1" 200 111440
So there are two log files for the same Django project, at the same log level I defined in the Django settings:
# django settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django_debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
Did you import Ansible as a Python module by any chance?
TLDR: When you import Ansible as a module, the logger instance inside the Ansible lib gets instantiated first, and you end up using Ansible's logger unless you set disable_existing_loggers to True.
Why does this happen?
Python's logging module keeps a single, process-wide registry of loggers: for any given name, only one logger object exists in memory in a given process.
Libraries (such as Ansible) can also use loggers internally. In my case, the Ansible's logger was instantiated somewhere in the Ansible global scope. https://github.com/ansible/ansible/blob/devel/lib/ansible/utils/display.py#L65
When you import the library, its logger is already instantiated, so your own logger will inherit the existing log handler and you'll end up writing logs to both log files.
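You can see this registry behaviour in plain Python; no Django or Ansible required:
import logging

a = logging.getLogger('ansible')
b = logging.getLogger('ansible')
print(a is b)  # True: getLogger returns the single instance registered per name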
What to do to prevent this?
Please try switching the disable_existing_loggers setting to True.
Django docs for logging: https://docs.djangoproject.com/en/2.1/topics/logging/#configuring-logging
Analogical settings for Python logger itself: https://docs.python.org/3/library/logging.config.html?highlight=disable_existing_loggers#logging.config.fileConfig
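Applied to the LOGGING dict from the question, it is a one-line change:
LOGGING = {
    'version': 1,
    # Disable loggers that already exist when this config is applied,
    # such as the one Ansible instantiates at import time.
    'disable_existing_loggers': True,
    # ... handlers and loggers unchanged ...
}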
One note for my case: I used logging.handlers.WatchedFileHandler, which seems to inherit previous loggers by default.
Please give feedback if this fixed your problem!
I am having problems using Sphinx to generate documentation for a Flask app. Without going into specific details of the app, its basic structure looks as follows.
__all__ = ['APP']

<python 2 imports>
<flask imports>
<custom imports>

APP = None  # module-level variable to store the Flask app
<other module level variables>

# App initialisation
def init():
    global APP
    APP = Flask(__name__)
    <other initialisation code>

try:
    init()
except Exception as e:
    logger.exception(str(e))

@APP.route(os.path.join(<service base url>, <request action>), methods=["POST"])
<request action handler>

if __name__ == '__main__':
    init()
    APP.run(debug=True, host='0.0.0.0', port=5000)
I've installed Sphinx in a venv along with other packages needed for the web service, and the build folder is within a docs subfolder which looks like this
docs
├── Makefile
├── _build
├── _static
├── _templates
├── conf.py
├── index.rst
├── introduction.rst
└── make.bat
The conf.py was generated by running sphinx-quickstart and it contains the line
autodoc_mock_imports = [<external imports to ignore>]
to ensure that Sphinx will ignore the listed external imports. The index.rst is standard
.. myapp documentation master file, created by
   sphinx-quickstart on Fri Jun 16 12:35:40 2017.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to myapp's documentation!
=============================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   introduction

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
and I've added an introduction.rst page to document the app members
===================
`myapp`
===================

Oasis Flask app that handles keys requests.

myapp.app
---------------------

.. automodule:: myapp.app
   :members:
   :undoc-members:
When I run make html in docs, I get HTML output in the _build subfolder, but with the following warning:
WARNING: /path/to/myapp/docs/introduction.rst:10: (WARNING/2) autodoc: failed to import module u'myapp.app'; the following exception was raised:
Traceback (most recent call last):
  File "/path/to/myapp/venv/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 657, in import_object
    __import__(self.modname)
  File "/path/to/myapp/__init__.py", line 4, in <module>
    from .app import APP
  File "/path/to/myapp/app.py", line 139, in <module>
    @APP.route(os.path.join(<service base url>, <request action>), methods=['GET'])
  File "/path/to/myapp/venv/lib/python2.7/posixpath.py", line 70, in join
    elif path == '' or path.endswith('/'):
AttributeError: 'NoneType' object has no attribute 'endswith'
and I am not seeing the documentation I expect to see for the app members, like the request handler and the app init method.
I don't know what the problem is, any help would be appreciated.
Try using sphinx-apidoc to automatically generate Sphinx sources that, using the autodoc extension, document a whole package in the style of other automatic API documentation tools. You will need to add 'sphinx.ext.autodoc' to your list of Sphinx extensions in your conf.py, too.
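For example, run from the project root (the output directory and package path here are illustrative):
sphinx-apidoc -o docs/ myapp/
and then rebuild with make html.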
While working a migration of an older Django project I ran into this error after running:
python manage.py check
cms.UserSettings.language: (fields.E005) 'choices' must be an iterable containing (actual value, human readable name) tuples.
Has anyone run into this issue? Unfortunately I have to wait until I am not on the corp network before I can ask the IRC channels.
http://docs.django-cms.org/en/latest/reference/configuration.html#cms-languages
It turns out I missed this important setting in my settings.py file:
CMS_LANGUAGES = {
    'default': {
        'fallbacks': ['en'],
        'redirect_on_fallback': True,
        'public': True,
        'hide_untranslated': False,
    }
}
Thanks to brianpck for pointing me in the right direction, though.
Is there a way I can print the query the Django ORM is generating?
Say I execute the following statement: Model.objects.filter(name='test')
How do I get to see the generated SQL query?
Each QuerySet object has a query attribute that you can log or print to stdout for debugging purposes.
qs = Model.objects.filter(name='test')
print(qs.query)
Note that in pdb, using p qs.query will not work as desired, but print(qs.query) will.
If that doesn't work, for old Django versions, try:
print str(qs.query)
Edit
I've also used custom template tags (as outlined in this snippet) to inject the queries in the scope of a single request as HTML comments.
You can also use Python logging to log all queries generated by Django. Just add this to your settings file:
LOGGING = {
    'disable_existing_loggers': False,
    'version': 1,
    'handlers': {
        'console': {
            # logging handler that outputs log messages to the terminal
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',  # message level to be written to console
        },
    },
    'loggers': {
        '': {
            # this sets the root logger to log debug and higher-level
            # logs to the console. All other loggers inherit settings
            # from the root logger.
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,  # whether to also pass records to the
                                 # parent logger (passes if set to True)
        },
        'django.db': {
            # django also has database-level logging; note that SQL
            # statements are only logged while DEBUG is True in settings
            'level': 'DEBUG',
        },
    },
}
Another method, in case the application is generating HTML output: the django-debug-toolbar can be used.
You can paste this code into your Django shell to display all the SQL queries:
import logging
l = logging.getLogger('django.db.backends')
l.setLevel(logging.DEBUG)
l.addHandler(logging.StreamHandler())
As long as DEBUG is on:
from django.db import connection
print(connection.queries)
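Each entry in connection.queries is a dict holding the raw SQL and its execution time in seconds, so the output looks roughly like this (values illustrative):
[{'sql': 'SELECT "app_model"."id", "app_model"."name" FROM "app_model"', 'time': '0.002'}]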
For an individual query, you can do:
print(Model.objects.filter(name='test').query)
Maybe you should take a look at django-debug-toolbar application, it will log all queries for you, display profiling information for them and much more.
A robust solution would be to have your database server log to a file and then
tail -f /path/to/the/log/file.log
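How to enable that logging depends on the database server; for PostgreSQL, for instance, settings along these lines in postgresql.conf will log every statement (values illustrative):
# postgresql.conf
logging_collector = on
log_directory = 'log'          # relative to the data directory
log_statement = 'all'          # log every SQL statement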
If you are using database routing, you probably have more than one database connection.
Code like this lets you see connections in a session.
You can reset the stats the same way as with a single connection: reset_queries()
from django.db import connections, reset_queries
...
reset_queries()  # resets data collection; call whenever it makes sense
...

def query_all():
    for c in connections.all():
        print(f"Queries per connection: Database: {c.settings_dict['NAME']} {c.queries}")

# and if you just want to count the number of queries
def query_count_all() -> int:
    return sum(len(c.queries) for c in connections.all())
You can use the Django Debug Toolbar to view the SQL queries.
Step-by-step guide to debug_toolbar usage:
Install the debug toolbar:
pip install django-debug-toolbar
Edit the settings.py file and add debug_toolbar to INSTALLED_APPS (below 'django.contrib.staticfiles'), and add the debug toolbar middleware to MIDDLEWARE:
INSTALLED_APPS = [
    # ...
    'debug_toolbar',
]
MIDDLEWARE = [
    # ...
    'debug_toolbar.middleware.DebugToolbarMiddleware',
]
Create a new list named INTERNAL_IPS at the end of the settings.py file:
INTERNAL_IPS = ['127.0.0.1']
This allows the debug toolbar to run only on the internal development server.
Edit the project's urls.py file and add the code below:
if settings.DEBUG:
    import debug_toolbar
    urlpatterns = [
        url(r'^__debug__/', include(debug_toolbar.urls)),
    ] + urlpatterns
Apply migrations and run the server again.
You will see an add-on on your web page at 127.0.0.1, and if you click on the SQL query checkbox, you can see the actual run time of each query as well.