I have a Django Elastic APM setup that sends traces and logs to the ELK stack. It actually works, but not the way I need it to. I get the trace, I get the metadata, and the logs are even received (2nd pic).
The problem is that I don't get any messages in the logs section, and I couldn't find a way to customize the fields.
But when I search directly in the logs, I see the following: the message does exist.
Finally, when I search in the Discover section, I can see even more info, including the fields I actually need.
QUESTION
So here are my questions. Is it possible to add at least the message info to the transaction logs (first pic)? Is it possible to add custom fields to the logs section (2nd pic)? Also, is there a way to make the logs clickable? (Also the 2nd pic; I mean it's just plain text, so I have to go to Discover and copy-paste the info from there.)
Finally, why are the logs marked as errors if they are just logs and are used like logs? I tried setting different levels such as debug or info, as you can see in the 2nd screenshot, but they still come through as errors and end up in the apm-7.14-error* index.
Here are my logging settings:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'elasticapm': {
            'level': 'DEBUG',
            'class': 'elasticapm.contrib.django.handlers.LoggingHandler',
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['elasticapm'],
            'level': 'DEBUG',
            'propagate': False,
        },
    }
}
And here's how I send the logs:
import logging

logger = logging.getLogger('meditations')

logger.info(
    'info',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.warning(
    'log',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.debug(
    'debug',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
I figured it out myself. The Elastic APM logging handler doesn't ship debug/info/warning logs; it is only meant to pass high-level (error/critical) events and it doesn't keep the message. If you ended up here, use regular Django logging and ship it to Elasticsearch via Logstash or Filebeat. I used Logstash.
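For reference, a minimal sketch of what that can look like, assuming the third-party python-logstash package and a Logstash pipeline with a TCP input listening on port 5959 (both the package and the port are assumptions; adjust them to your own pipeline):

# settings.py - sketch only; assumes `pip install python-logstash`
# and a Logstash tcp input on port 5959 that forwards to Elasticsearch.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'DEBUG',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'localhost',  # Logstash host (assumption)
            'port': 5959,         # must match the tcp input in the pipeline
            'version': 1,         # use the version 1 event schema
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['logstash'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}

With this kind of setup the extra fields passed to the logger are attached to the Logstash event as their own fields, which should make them searchable in Discover.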
Related
Below is the logging configuration I have for a DRF application on Azure App Service. I tried using TimedRotatingFileHandler, but I was not able to save the logs with that option. Also, whenever the App Service restarts, the previous logs get erased. Is there a way to maintain day-wise logs in Azure App Service?
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'filters': {
        'error_mails': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': 'app.log.CustomExceptionReporter'
        },
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
    },
    'handlers': {
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': f'{BASE_DIR}/app_log.log',
            'maxBytes': 9000000,
            'backupCount': 10,
            'formatter': 'standard'
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
            'reporter_class': 'app.log.CustomExceptionReporter',
            # 'filters': ['require_debug_false',]
        },
    },
    'loggers': {
        'django': {
            'handlers': ['logfile', 'mail_admins'],
            'propagate': True,
            'level': 'ERROR',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'app': {
            'handlers': ['logfile', 'console'],
            'level': 'INFO',
            'propagate': True,
        }
    }
}
Is there a way to maintain day-wise logs in Azure App Service?
Currently, it is not possible to save logs day-wise. By default, App Service saves the log in LogFiles\Application\, and the log file name cannot be changed either.
Application logging
File System - Saves the logs to the file system. This setting only stays enabled for 12 hours.
Azure Blob Storage - You can also save the logs to Azure Blob Storage, even if you also save them to the file system. (This feature is not available for Linux App Service.)
You can also define a level of verbosity for the application logging to catch. This allows you to filter logging data into Error, Warning, Information, or Verbose categories. All information that you log is caught by the Verbose value. Refer to Azure App Service log files.
whenever the App Service restarts, the previous logs get erased
You have to take a backup of your app so that you can retain all the information (configuration, log files, app details) about the App Service. Refer to App Service Backup.
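If day-wise rotation on the file system is still wanted despite the caveats above, a TimedRotatingFileHandler entry along the following lines rotates at midnight and keeps one file per day. This is only a sketch: on App Service the local file system is not durable, so rotated files can still disappear on a restart, and the backupCount value is an arbitrary choice.

'logfile': {
    'level': 'DEBUG',
    'class': 'logging.handlers.TimedRotatingFileHandler',
    'filename': f'{BASE_DIR}/app_log.log',
    'when': 'midnight',   # rotate once per day
    'backupCount': 14,    # keep roughly two weeks of daily files
    'formatter': 'standard'
},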
I'm trying to configure Django logging in the Django settings file so that it logs Django info and info for my application to a custom file for easy viewing. Here's my logging config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
        'file': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'file',
            'filename': 'logs/django_log.log',
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5242880,  # 5*1024*1024 bytes (5MB)
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'console'],
            'level': 'INFO',
            'propagate': True,
        },
        'py.warnings': {
            'handlers': ['console'],
        },
        'my_application': {
            'level': 'INFO',
            'handlers': ['file', 'console'],
            # required to avoid double logging with root logger
            'propagate': False,
        },
    },
}
This works on my local manage.py test server, both for Django events and for events that I log myself, initialized with my_application as the logger name. However, on my web server the log file is created but, oddly, only populated with occasional Django WARNING messages. So there is no permissions error or inability to access the log file. Since the same config works locally, the config can't be the issue, and it is clearly set to the INFO level.
My server setup is taken from this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04 and uses Gunicorn with Nginx as a reverse proxy. Could the issue be with the configs for these? I'm stumped.
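One difference between my local manage.py runserver and the Gunicorn deployment that I can think of is the working directory, since the handler above uses the relative path 'logs/django_log.log'. I'm not sure this is the cause, but a sketch of making the path absolute from BASE_DIR in settings.py would look like:

import os

# Build the log path from the project root instead of the process working
# directory, so runserver and Gunicorn write to the same file.
LOG_DIR = os.path.join(BASE_DIR, 'logs')
os.makedirs(LOG_DIR, exist_ok=True)  # make sure the directory exists

LOGGING['handlers']['file']['filename'] = os.path.join(LOG_DIR, 'django_log.log')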
Additionally, what's a good best-practice place to store this Django log file?
Also, one related bonus question: what's a good best-practice free/cheap service that can notify you when a specific error is logged? It seems like a good idea to set something like that up, but I don't think the Django emailer is necessarily the most elegant or best option.
I am trying to use google_cloud_logging with Django to log JSON logs to Stackdriver. However, the received format on Stackdriver is not as I would expect.
My settings.py LOGGING setup looks like:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,  # google.cloud.logging.Client() instance created earlier in settings
        },
    },
    'loggers': {
        '': {
            'handlers': ['console', 'stackdriver_logging'],
            'level': 'INFO',
            'propagate': True,
        },
        'django': {
            'handlers': ['console', 'stackdriver_logging'],
            'level': 'INFO',
            'propagate': True
        }
    }
}
and I write dictionary logs like:
_logger.info({'message_type':'profile', 'runtime': 500})
I would expect on Stackdriver for messages to appear as:
{
  ...
  jsonPayload: {
    'message_type': 'profile',
    'runtime': 500
  }
}
However they appear with the following format:
{
  ...
  jsonPayload: {
    'message': "{'message_type':'profile','runtime': 500}"
  }
}
where, instead of the jsonPayload being the sent dictionary directly, the dictionary is string-encoded in 'message'. It's unclear what I should change to get the desired format on Stackdriver. Could anyone point me in the right direction?
If you are looking to format the logs you ingest into Stackdriver then you have to use Structured Logging.
If you use the Stackdriver Logging API or the command-line utility, gcloud logging, you can control the structure of your payloads. If you use the Stackdriver Logging agent to get your log entries, you can specify that the Logging agent convert your payloads to JSON format. See LogEntry for more information about different payload formats.
To enable structured logging, you must change the default configuration of the Logging agent when installing or reinstalling it.
If your logs are not supported by the standard parsers, you can write your own. Parsers consist of a regular expression that is used to match log records and apply labels to the pieces. See some examples here
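As an illustration of the API route mentioned above, the Cloud Logging client can also be called directly with a structured payload instead of going through the stdlib handler. A minimal sketch (the log name 'django-app' is an arbitrary choice here):

import google.cloud.logging

# Send a structured entry straight through the Cloud Logging API, so the
# dict becomes the jsonPayload instead of a stringified 'message' field.
client = google.cloud.logging.Client()
cloud_logger = client.logger('django-app')  # arbitrary log name

cloud_logger.log_struct(
    {'message_type': 'profile', 'runtime': 500},
    severity='INFO',
)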
I am trying to set up my logging configuration in settings.py, and with so many options I'm having trouble replicating the built-in development server log (the one that prints to the console).
I want my production log to record the same information that would normally be printed to console in the development server log (GET requests, debug info, etc). I either need to know which settings I need to change below, or the location of the settings for the built-in development server log, so that I can copy that.
LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/home/django/django_log.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}

if DEBUG:
    # make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']
If at all possible, I also do not want to add any code anywhere other than my settings.py. I don't want to go into my views.py and specify what errors to print or log; I never had to do that with the development server, so I'm hoping I can figure this out.
In Django 1.8, the default logging configuration for a debug environment is:
When DEBUG is True:
The django catch-all logger sends all messages at the WARNING level or higher to the console. Django doesn’t make any such logging calls at this time (all logging is at the DEBUG level or handled by the django.request and django.security loggers).
The py.warnings logger, which handles messages from warnings.warn(), sends messages to the console.
This logging configuration can be found at django.utils.log.DEFAULT_LOGGING. Note that the catch-all logger actually gets info messages as well, not just warning and above.
When overriding the default logging settings, note that disable_existing_loggers, if set to True, will shut up all of Django's default loggers.
The development server logs every incoming request directly to stderr like this:
[18/Oct/2015 12:08:17] "GET /about/ HTTP/1.1" 200 9946
This is specific to the development server and will not be carried over to a production environment, unless you replicate it with middleware.
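For completeness, a minimal sketch of such middleware, assuming it is added to MIDDLEWARE and that the logger name used here ('request_logger') is wired up in LOGGING; the output only approximates what runserver prints and is not the dev server's actual implementation:

import logging
import time

logger = logging.getLogger('request_logger')  # illustrative logger name

class RequestLogMiddleware:
    """Log each request roughly the way runserver prints it."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        duration_ms = (time.time() - start) * 1000
        logger.info(
            '"%s %s" %s (%.0f ms)',
            request.method,
            request.get_full_path(),
            response.status_code,
            duration_ms,
        )
        return response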
We have a website with several thousand requests per minute and I'd like to limit the maximum number of error admin mails that Django sends us, in case we have an error in our code.
On my local dev system, the following logging configuration works nicely. It sets a cache value, which expires after 10 seconds. An error that occurs within this time interval will not be reported at all. That's what we want. All of the following code is placed inside our project settings file.
def limit_error_interval(record):
    from django.core.cache import cache
    if not cache.get('last_error'):
        cache.set('last_error', '1', 10)  # 10 sec
        return True
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'error_limit': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': limit_error_interval
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['error_limit',],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins',],
            'level': 'ERROR',
            'propagate': True
        },
    }
}
However, the code fails on our production server (Nginx). limit_error_interval gets called, but every error is still sent by mail. Our cache works within the application code, but it appears not to be working inside this function. Is there a problem with using the cache inside the settings?
This alternative approach/snippet has exactly the same problem: works on local dev system, but not in production.
class ErrorIntervalFilter(object):
    def filter(self, record):
        from django.core.cache import cache
        if not cache.get('last_error'):
            cache.set('last_error', '1', 10)
            return True
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['error_interval'],
        },
    },
    'filters': {
        'error_interval': {
            '()': ErrorIntervalFilter,
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}
Any help or hint appreciated! :-) Merry Christmas!
Perhaps you are running several instances of the same application in the production environment; in that case the limit will only work per instance.
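If that is the case, the default per-process LocMemCache would explain the behaviour: every worker keeps its own 'last_error' key, so the filter never sees the value set by another process. A sketch of pointing Django at a shared cache backend instead (Memcached on localhost is just an example; the PyMemcacheCache backend name applies to recent Django versions, older ones use MemcachedCache):

# settings.py - shared cache so all workers/instances see the same
# 'last_error' key used by the logging filter.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}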
In my experience, a log aggregation tool like Sentry is so much better than receiving errors by email that it is not worth chasing the cause of your problem. Just install Sentry and be happy.