In my django app, I have defined the logging configuration:
default_config = {
    'handlers': handlers_to_use,
    'level': 'WARN',
}

LOGGING: Dict[str, Any] = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': handler_configs,
    'root': default_config,
    'loggers': {
        '': default_config,
    },
}
So you can see I'm using both the unnamed logger '' and the root logger, which should set the default logging level to WARN. However, some packages (factory_boy and PIL) are giving me DEBUG logs, which doesn't make sense: based on the level hierarchy, WARN should only let through WARN, ERROR, and CRITICAL logs.
How are they overriding the default? If I add factory and PIL to the list of loggers, things work correctly, but I'm wondering why neither the unnamed logger nor the root logger catches the DEBUG logs.
Any help would be greatly appreciated
You're still getting the loggers that are defined elsewhere, because of this line:
'disable_existing_loggers': False
The other packages like factory_boy and PIL are not overriding the default. You are just not overriding them.
If you disable the existing loggers, it's then on you to define everything yourself. You'll only get whatever's configured in your config.
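To illustrate, here is a minimal, runnable sketch of the fix the question already mentions, assuming a plain console handler (the question's real handler names aren't shown). The key point is that an explicit entry for each noisy package overrides whatever level that package's logger was already carrying:

```python
import logging
import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
    'loggers': {
        # Explicit entries for the noisy packages: dictConfig resets the
        # level of every logger it names, even pre-existing ones.
        'factory': {'level': 'WARNING'},
        'PIL': {'level': 'WARNING'},
    },
}
logging.config.dictConfig(LOGGING)
```

With `disable_existing_loggers: False`, loggers the packages already created keep their own levels unless you name them like this; the root level only applies to records that make it up the hierarchy without being filtered first.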
I tried to find this, but I'm probably not searching the right way. When I run my Django app, I get a record printed to the error output every time an endpoint is called, similar to:
[20/Jun/2019 09:45:37] "GET /analyst/run_session/ HTTP/1.1" 200 1271
The problem is, I have a call set up every second to refresh data from the api, so my console is being flooded by these entries.
I have tried setting DEBUG=False, and setting MESSAGE_LEVEL=message_constants.ERROR, but it doesn't seem to suppress these entries. Is there something obvious I'm missing?
This is done via logging configs, you will need to override the config for the django.server logger.
For example:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        'django.server': {
            'level': 'ERROR',
        },
    },
}
I want to log all database changes made from the application, not only from the Django admin. How can I achieve that? Currently we can only see history in the Django admin for changes made through the admin interface. Do I need to define signals for this?
You have to enable logging in settings.py. Put this code in your settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
Django documentation for logging - https://docs.djangoproject.com/en/1.11/topics/logging/#django-db-backends
Django admin uses LogEntry.objects.log_action to record those history changes. There's nothing stopping you from calling that same method in your own code to record changes made elsewhere.
You can use a pre_save signal to get the object before committing to the database and then fetch the old values from the database to compare and check for changes.
The message can be a plain string, but the admin stores it in a JSON format so it can be translated. You can look at the source for construct_change_message in django.contrib.admin.utils to figure out that JSON format if you want to keep using it for ManyToManyField changes, etc.
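A sketch combining both suggestions: compare old and new values in a pre_save signal, then record the change with the same LogEntry.objects.log_action call the admin uses. Note that `Book`, `myapp`, and the `modified_by_id` attribute are hypothetical; adapt them to your models, and decide for yourself how to determine which user made the change.

```python
from django.contrib.admin.models import LogEntry, CHANGE
from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import pre_save
from django.dispatch import receiver

from myapp.models import Book  # hypothetical model


@receiver(pre_save, sender=Book)
def log_book_changes(sender, instance, **kwargs):
    if instance.pk is None:
        return  # new object, nothing to diff against yet
    # Fetch the old row and compare field by field.
    old = sender.objects.get(pk=instance.pk)
    changed = [
        f.name for f in sender._meta.fields
        if getattr(old, f.name) != getattr(instance, f.name)
    ]
    if changed:
        LogEntry.objects.log_action(
            user_id=instance.modified_by_id,  # assumes you track the user
            content_type_id=ContentType.objects.get_for_model(sender).pk,
            object_id=instance.pk,
            object_repr=str(instance),
            action_flag=CHANGE,
            change_message="Changed fields: %s" % ", ".join(changed),
        )
```

Because these entries go through LogEntry, they show up in the same admin history views as changes made through the admin itself. The extra SELECT per save is the cost of the diff.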
There are two types of changes possible.
If you are concerned with structural changes to the database schema, those are saved anyway in the migrations folder inside your app directory.
If you want to log DB changes in terms of entries made in the database, you might find the python package django-audit-log useful. You can install it via pip, and once installed, you can add trackers to your models by doing something like this:
from audit_log.models.managers import AuditLog

class YourModelName(models.Model):
    # your model definition here
    audit_log = AuditLog()
You can find the docs here
Another alternative is django-reversion which allows you to do version control for model instances.
Hope this helps!
e.g. in get_response() at https://github.com/django/django/blob/master/django/core/handlers/base.py#L133
there is a
logger.warning('Not Found: %s', request.path,
    extra={
        'status_code': 404,
        'request': request
    })
...which seems to log something every time a request 404s, I think.
This is clogging up my logs as (for instance) RSS bots crawl some old, non-working URLs on my site
I'd like to stop the logging noise, so I've tried something like the below in my LOGGING config in settings.py.
LOGGING = {
    ...
    'loggers': {
        ...
        'django.core.handlers': {
            'handlers': ['app_logs'],
            'propagate': False,
            'level': 'ERROR'
        },
    }
}
Here, I'm trying to quiet the logger.warning by setting the module log level of django.core.handlers to ERROR, but it doesn't seem to work. Does anyone know what to check or do?
Maybe I'm missing something obvious or perhaps flat out doing it wrong hmmm
The correct logger name is django.request.
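A minimal, runnable sketch of that fix: raising django.request to ERROR silences the "Not Found" warnings (they are logged at WARNING level) while genuine 5xx errors still get through.

```python
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        # 404s are logged at WARNING on django.request, 5xx at ERROR,
        # so a level of ERROR drops the 404 noise only.
        'django.request': {
            'level': 'ERROR',
        },
    },
})
```

In a real project this fragment would be merged into the LOGGING dict in settings.py rather than passed to dictConfig by hand.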
Django (the Python web framework) uses Python's logging system for its logs.
Is there an easy way to store log messages in a database, and then allow admin users to look over them via the web? It's the sort of thing I could write myself, but no point re-inventing the wheel. I don't want to log exceptions, but info/debug/notice type messages that I have added to the code.
Ideally I'd like to be able to store metadata about the log message as it's done (like the remote IP address, user agent, wsgi process id, etc.), and then filter / browse based on that (i.e. show me all log messages from this IP address in the last 24 hours). Has anyone done this?
Just use Sentry. Raven, its Django client, hooks into the logging framework, so in addition to collecting errors from your app, any custom log messages should show up as well.
Apart from the obvious choice of Sentry, for the sake of exercise, there is a nice blog article titled "Creating custom log handler that logs to database models in django" in the "James Lin Blog", which briefly explains how to do this using a second database, with code samples.
The code is adapted from the standard Python RotatingFileHandler:
...The RotatingFileHandler allows you to specify which file to write to and rotate files, therefore my DBHandler should also allow you to specify which model to insert to and specify expiry in settings, and best of all, on a standalone app.
This could also be easily adapted for using a single db.
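To make the idea concrete without reproducing the blog post's code, here is a framework-free sketch of a database log handler. The article's version inserts into a Django model (with expiry handling); here a plain sqlite3 table stands in so the core pattern, subclassing logging.Handler and overriding emit(), is runnable anywhere:

```python
import logging
import sqlite3


class DBHandler(logging.Handler):
    """Sketch of a handler that writes log records to a database table."""

    def __init__(self, conn):
        super().__init__()
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS log "
            "(level TEXT, logger TEXT, message TEXT)"
        )

    def emit(self, record):
        try:
            self.conn.execute(
                "INSERT INTO log (level, logger, message) VALUES (?, ?, ?)",
                (record.levelname, record.name, self.format(record)),
            )
            self.conn.commit()
        except Exception:
            # Never let a logging failure crash the application.
            self.handleError(record)


conn = sqlite3.connect(":memory:")
logger = logging.getLogger("db_demo")
logger.setLevel(logging.INFO)
logger.addHandler(DBHandler(conn))
logger.info("stored in the database")

rows = conn.execute("SELECT level, message FROM log").fetchall()
```

Swapping the sqlite3 calls for a `LogModel.objects.create(...)` on a second database gives you roughly the shape the blog post describes, and the rows can then be browsed through the admin.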
Check out django-db-logger. It takes less than a minute to integrate:
https://github.com/CiCiUi/django-db-logger
Try django-requests. I've tried it, and it basically just puts the request logs in a table called requests.
You can check a good solution that I posted here. You just need a connection string to connect to your database. For example, if you use MySQL, the connection string should be:
# mysqlclient
'mysql+mysqldb://username:password@host:port/database'
or
# PyMySQL
'mysql+pymysql://username:password@host:port/database'
Then you can use phpMyAdmin as a "MySQL web administration tool" to look over the database via a web browser, or DataGrip (my preference) to access any database remotely.
To use the handler in Django, you just need to add the handler class to the LOGGING variable of settings.py as follows:
level = 'INFO' if DEBUG else 'WARNING'  # I prefer INFO in debugging mode and WARNING in production
handler = ['log_db_handler']  # In production I rarely check the server to see console logs
if DEBUG:
    handler.append('console')

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {message}',  # {asctime} {module} {process:d} {thread:d}
            'style': '{',
        },
    },
    'handlers': {
        'log_db_handler': {
            'level': level,
            'class': 'db_logger.handlers.DBHandler',
            'formatter': 'verbose',
        },
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'db_log': {
            'handlers': handler,
            'level': level,
            'propagate': False,
        },
        'django': {
            'handlers': handler,
            'level': level,
            'propagate': True,
        },
        'django.request': {
            'handlers': handler,
            'level': level,
            'propagate': True,
        },
    },
}
Pay attention that the 'db_logger.handlers.DBHandler' points to the handler class.
I've got error emails set up via Django's logging mechanism in 1.3. It sends me a nice email when an error happens. However, when I log a simple error message it's formatted oddly, and I'm not sure why.
For example, there's a condition in my app where if something doesn't exist in the DB I want to know about, but I have a suitable default value that will work fine. Thus, I want an email sent to me with some info; it's not necessarily happening on an Exception.
If I do something like this:
logger.error("fee did not exist in the database for action %s", "actionX")
The information in the logfile is fine, but the email is really lacking some information. Here's the subject line:
[Django] ERROR: Test this jazz %s
And then the body:
None
Request repr() unavailable
My question is: how do I get (A) the value to show up in the subject, and (B) some actual, relevant information in the body, like the line number or something like that?
You need to acknowledge two things:
You want to send an email using Python's builtin logging system and
You are not logging a regular exception, so the builtin mechanism for sending emails won't work, since it depends on an exception-like object being passed and information being stored in a traceback.
Anyways, not impossible!
LOGGING = {
    ...
    'handlers': {
        ...
        'my_special_mail_handler': {
            'level': 'ERROR',
            'filters': [],
            'class': 'myapp.loggers.MyEmailHandler',
            'include_html': False,
        },
    },
    'loggers': {
        ...
        'my_special_logger': {
            'handlers': ['console', 'my_special_mail_handler'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}
MY_RECIPIENTS = (("Name of person", "email@example.com"),)
...that stuff is merged into your settings.
Then, there's your special logging class, MyEmailHandler:
from django.utils.log import AdminEmailHandler
from django.core.mail.message import EmailMultiAlternatives
from django.conf import settings
class MyEmailHandler(AdminEmailHandler):
    def emit(self, record):
        if not getattr(settings, "MY_RECIPIENTS", None):
            return
        subject = self.format_subject(record.getMessage())
        message = getattr(record, "email_body", record.getMessage())
        mail = EmailMultiAlternatives(
            u'%s%s' % (settings.EMAIL_SUBJECT_PREFIX, subject),
            message,
            settings.SERVER_EMAIL,
            [a[1] for a in settings.MY_RECIPIENTS],
        )
        mail.send(fail_silently=False)
Now you're able to create a special logging entry that's both emailed and output to the terminal this way:
import logging

logger = logging.getLogger("my_special_logger")
error = logger.makeRecord(
    logger.name, logging.ERROR, 0, 0,
    u"Subject: Some error occurred",
    None, None, "", None,
)
error.email_body = "Detailed body text goes here"
logger.handle(error)
and to make stuff easy, use a utility function:
import logging

def my_log(msg, body, level=logging.ERROR):
    logger = logging.getLogger("my_special_logger")
    error = logger.makeRecord(
        logger.name, level, 0, 0,
        msg,
        None, None, "", None,
    )
    error.email_body = body
    logger.handle(error)
The Django Logging docs state:
Of course, it isn't enough to just put logging calls into your code. You also need to configure the loggers, handlers, filters and formatters to ensure that logging output is output in a useful way.
You need to tweak the settings in your LOGGING dictionary (the one passed to logging.config.dictConfig). That lets you set out exactly what information you want and how to format it.
Cutting a small bit from the docs:
'formatters': {
    'verbose': {
        'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
    },
},
This gives you an idea of how you can influence the output, and the Python logging.config docs will fill out the available options.
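A quick, framework-free illustration of the point: the same log call rendered through a richer format string picks up the level, module, and line number the question was missing. The record fields below are constructed by hand purely for the demo.

```python
import logging

# Build a record like the one logger.error(...) would create.
record = logging.LogRecord(
    name='myapp', level=logging.ERROR, pathname='views.py', lineno=42,
    msg='fee did not exist in the database for action %s',
    args=('actionX',), exc_info=None,
)

# A formatter that includes the module and line number alongside
# the fully interpolated message.
formatter = logging.Formatter(
    '%(levelname)s %(module)s:%(lineno)d %(message)s'
)
line = formatter.format(record)
# line is now "ERROR views:42 fee did not exist in the database for action actionX"
```

Note that %(message)s is the message after the %s arguments have been applied, which is why a proper formatter also solves the literal "%s" showing up in the email subject.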