I am looking for a way to exclude 4xx errors from my New Relic error rate reporting for a Django project.
It's a standard installation of the New Relic agent and the Django framework.
Any help appreciated.
You should be able to update your New Relic config file to ignore these errors.
https://docs.newrelic.com/docs/agents/python-agent/configuration/python-agent-configuration#error-collector-settings
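For example (a sketch only; confirm the exact setting name for your agent version against the docs above), the error collector in newrelic.ini accepts a space-separated list of status codes and ranges to ignore:

# newrelic.ini -- sketch; verify the key against your agent version's docs
[newrelic]
# Keep the agent's defaults (e.g. 100-102 429) and add the 4xx codes
# you do not want counted towards the error rate.
error_collector.ignore_status_codes = 100-102 429 400-499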
Use a filter in your logging settings that changes the level of the 4xx errors. I've done this for 404 errors:
def change_404_level_to_INFO(record):
    # Records from django.request carry the response's status_code
    if getattr(record, 'status_code', None) == 404:
        record.levelname = 'INFO'
    return True

LOGGING = {
    ...,
    'filters': {
        'change_404_to_info': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': change_404_level_to_INFO,
        },
    },
    'handlers': {
        ...  # your newrelic handler
    },
    'loggers': {
        'django': {  # Django's catch-all logger
            'handlers': ['newrelic', 'mail_admins'],
            'level': 'WARNING',
            'propagate': False,
            'filters': ['change_404_to_info'],
        },
    },
}
I deployed a Django app on IIS; however, my logging code, which was working perfectly on localhost, caused a server 500 error...
Can I get any help, please?
LOGGING = {
    'version': 1,
    'loggers': {
        'django': {
            'handlers': ['debuglog'],
            'level': 'DEBUG'
        },
        'django.server': {
            'handlers': ['errorlog'],
            'level': 'ERROR'
        }
    },
    'handlers': {
        'debuglog': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': './logs/debug.log',
            'formatter': 'simple',
        },
        'errorlog': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': './logs/error.log',
            'formatter': 'simple',
        }
    },
    'formatters': {
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        }
    }
}
Maybe IIS does not allow Django to create log files and needs permission to do so? If that's the case, how would I grant it?
You can follow the steps below to deploy the Django site in IIS:
1) Add the site with your project root path in IIS.
2) Select the site -> double-click Handler Mappings. Click “Add Module Mapping…” from the Actions pane on the right.
3) Set the request path to * and the module to FastCgiModule.
Set the executable:
C:\Python39\Scripts\python.exe|C:\Python39\Lib\site-packages\wfastcgi.py
4) Now go back to the server node and select the FastCGI Settings feature.
Add the environment variables below:
DJANGO_SETTINGS_MODULE = “path to your settings”
WSGI_HANDLER = django.core.wsgi.get_wsgi_application()
PYTHONPATH = “path to your repo root, e.g. C:\inetpub\wwwroot\djangosite”
Do not forget to assign IIS_IUSRS and IUSR permissions to the site root folder and the Python folder.
Check the log folder permissions too.
https://learn.microsoft.com/en-us/visualstudio/python/configure-web-apps-for-iis-windows?view=vs-2019
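One common cause of a 500 error with a config like this under IIS is that the worker process does not run from the project directory, so relative filenames such as './logs/debug.log' point somewhere the process cannot write. A minimal sketch (assuming BASE_DIR is defined in settings.py, as in a standard Django project) that builds absolute paths and creates the log folder up front:

import os

# Build log paths from BASE_DIR so they do not depend on the
# IIS worker process's current working directory.
LOG_DIR = os.path.join(BASE_DIR, 'logs')
os.makedirs(LOG_DIR, exist_ok=True)

LOGGING['handlers']['debuglog']['filename'] = os.path.join(LOG_DIR, 'debug.log')
LOGGING['handlers']['errorlog']['filename'] = os.path.join(LOG_DIR, 'error.log')

If makedirs itself fails, that points back at the folder permissions mentioned above.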
I have Django LOGGING configured with the standard "mail admins on 500 errors" handler:
'mail_admins': {
    'level': 'ERROR',
    'filters': ['require_debug_false'],
    'class': 'django.utils.log.AdminEmailHandler'
},
When I put site into maintenance mode (django-maintenance-mode), it correctly responds with 503 "Service Unavailable" for anonymous requests. This causes a barrage of emails to admins when the site is in maintenance mode.
I want to "filter out 503 response IF site is in maintenance mode" to stem the flood.
But I can't see a simple way to do this (e.g. a logging filter would need the request in order to check whether the site is in maintenance mode).
I know I could change the maintenance response code to a 400-level error, but that seems like a non-semantic hack. I could also suspend admin emails during maintenance, but that requires remembering to hack and then revert the settings file. I'm hoping someone has a clever idea for achieving this simply, with no hacks.
You can simply create a 'require_not_maintenance_mode_503' filter in the LOGGING setting of your settings.py and add it to the 'mail_admins' handler. This will prevent emails from being sent on 503 errors. For example:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_not_maintenance_mode_503': {
            '()': 'maintenance_mode.logging.RequireNotMaintenanceMode503',
        },
    },
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['require_not_maintenance_mode_503'],
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}
Actually, I can suggest a hacky method. You can create a class that inherits from AdminEmailHandler and skips the email for 503 reports.

from django.utils.log import AdminEmailHandler

class CustomAdminEmailHandler(AdminEmailHandler):
    def send_mail(self, subject, message, *args, **kwargs):
        # Not sure about the exact condition; you can find the correct one by debugging.
        if 'service unavailable' not in subject.lower():
            super().send_mail(subject, message, *args, **kwargs)

Don't forget to change the class of mail_admins to the new one in settings, as shown below.
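A sketch of that settings change, assuming the handler class lives in a hypothetical yourapp/logging_handlers.py module:

'handlers': {
    'mail_admins': {
        'level': 'ERROR',
        'filters': ['require_debug_false'],
        'class': 'yourapp.logging_handlers.CustomAdminEmailHandler',  # hypothetical dotted path
    },
},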
You could just set DEBUG = True while the site is in maintenance mode -- I mean, the require_debug_false filter is right there in your logging config.
DOH!
Still, it requires modifying the settings file and then remembering to change it back when taking the site out of maintenance. A bit hacky, but it seems pretty simple otherwise.
I have encountered a strange behavior of Django Loggers.
I am developing a front end application using Django. During the login service, I make some requests to certain components and use log.warning() calls to see the flow of the requests.
The logs worked perfectly until I decided to add a LOGGING configuration to write the log output to a file, as I want to deploy the application via Docker and periodically check the log files.
When I added the following logging configuration to the Django settings:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'detailed': {
            'class': 'logging.Formatter',
            'format': "[%(asctime)s] - [%(name)s:%(lineno)s] - [%(levelname)s] %(message)s",
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'detailed',
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': "{}/am.log".format(BASE_DIR),
            'mode': 'w',
            'formatter': 'detailed',
            'level': 'INFO',
            'maxBytes': 2024 * 2024,
            'backupCount': 5,
        },
    },
    'loggers': {
        'am': {
            'level': 'INFO',
            'handlers': ['console', 'file']
        },
    }
}
the logging stopped working. The file specified in the logging configuration, am.log, is indeed created, but nothing gets printed to it. Even the console logging no longer takes place.
I took this logging configuration from one of my Django projects (the backend of this application), where it works perfectly. I really don't understand what I am doing wrong. Could you please help me or point me in the right direction? I would be very grateful.
I wish you all a good day!
By using the key "am" in your 'loggers' configuration, you're defining one logger with name "am":
'loggers': {
    'am': {  # <-- name of the logger
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
}
So to use that logger, you have to get it by that name:
logger = logging.getLogger("am")
logger.warning("This is a warning")
If you name your loggers by the name of the module in which you're running, which is recommended practice, then you need to define each module logger:
logger = logging.getLogger(__name__) # <-- this logger will be named after the module, e.g. your app name.
Then in your logging configuration you can specify logging behavior per module (per app):
'loggers': {
    'my_app': {  # <-- logging for my app
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
    'django': {  # <-- logging for the django module
        'level': 'WARNING',
        'handlers': ['console', 'file']
    },
}
Or if you just want to log everything the same way, use the root ('') logger, which doesn't have a name, just an empty string:
'loggers': {
    '': {  # <-- root logger
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
}
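For example, with the per-app or root-logger config above, module code picks up the right logger just by using __name__ (my_app/views.py is a hypothetical module name here):

# my_app/views.py -- hypothetical module
import logging

logger = logging.getLogger(__name__)  # logger name resolves to "my_app.views"

def login_view(request):
    logger.warning("Login flow started")  # handled via the "my_app" (or root) logger config
    ...

Because "my_app.views" is a child of "my_app", its records propagate to the handlers configured for "my_app" by default.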
I'm trying to configure Django logging in the settings file so that it logs Django info and info from my application to a custom file for easy viewing. Here's my logging config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
        'file': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'file',
            'filename': 'logs/django_log.log',
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5242880,  # 5*1024*1024 bytes (5MB)
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'console'],
            'level': 'INFO',
            'propagate': True,
        },
        'py.warnings': {
            'handlers': ['console'],
        },
        'my_application': {
            'level': 'INFO',
            'handlers': ['file', 'console'],
            # required to avoid double logging with root logger
            'propagate': False,
        },
    },
}
This works on my local manage.py test server: both Django events and the events that I log (initialized with my_application as the logger name) appear. However, on my web server, the log file is created but, oddly, only populated with occasional Django WARNING messages, so there is no permissions error or inability to access the log file. Since the same config works locally, the config can't be the issue, and it clearly specifies INFO as the logging level.
My server setup is taken from this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04 and uses Gunicorn with Nginx as a reverse proxy. Could the issue be with the configs for these? I'm stumped.
Additionally, what's a good, best-practice place to store this Django log file?
Also, one related bonus question: what's a good free/cheap service that can notify you when a specific error is logged? It seems like a good idea to set something like that up, but I don't think the Django emailer is necessarily the most elegant or best option.
We have a website with several thousand requests per minute and I'd like to limit the maximum number of error admin mails that Django sends us, in case we have an error in our code.
On my local dev system, the following logging configuration works nicely. It sets a cache value, which expires after 10 seconds. An error that occurs within this time interval will not be reported at all. That's what we want. All of the following code is placed inside our project settings file.
def limit_error_interval(record):
    from django.core.cache import cache
    if not cache.get('last_error'):
        cache.set('last_error', '1', 10)  # 10 sec
        return True
    return False  # an error was already reported within the last 10 seconds
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'error_limit': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': limit_error_interval
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['error_limit',],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins',],
            'level': 'ERROR',
            'propagate': True
        },
    }
}
However, the code fails on our production server (behind Nginx): limit_error_interval gets called, but every error is still sent by mail. Our cache works elsewhere in the code, but it appears not to work inside this function. Is there a problem with using the cache inside the settings file?
This alternative approach/snippet has exactly the same problem: it works on the local dev system, but not in production.
class ErrorIntervalFilter(object):
    def filter(self, record):
        from django.core.cache import cache
        if not cache.get('last_error'):
            cache.set('last_error', '1', 10)
            return True
        return False

LOGGING = {
    'version': 1, 'disable_existing_loggers': False,
    'handlers': {'mail_admins': {'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', 'filters': ['error_interval'],}},
    'filters': {'error_interval': {'()': ErrorIntervalFilter,}},
    'loggers': {'django.request': {'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True,},},
}
Any help or hint appreciated! :-) Merry Christmas!
Perhaps you are running several instances of the same application in the production environment; the limit will only work per instance unless the instances share a cache backend (the default LocMemCache is per-process, so each worker keeps its own 'last_error' flag).
In my experience, an error aggregation tool like Sentry is so much better than receiving errors by email that it is not worth chasing the cause of your problem. Just install Sentry and be happy.
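A minimal sketch of wiring Sentry into a Django project with the official SDK (the DSN below is a placeholder; the real one comes from your Sentry project settings):

# settings.py -- minimal Sentry setup
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[DjangoIntegration()],
    send_default_pii=False,  # don't send user data by default
)

Sentry also groups duplicate events and rate-limits its own notifications, which addresses the flood-of-emails concern without a custom cache-based filter.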