I am trying to use google_cloud_logging with Django to log JSON logs to Stackdriver. However, the received format on Stackdriver is not as I would expect.
My settings.py LOGGING setup looks like:
import google.cloud.logging

# Client assumed to be created earlier in settings.py
log_client = google.cloud.logging.Client()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
        'stackdriver_logging': {
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': log_client,
        },
    },
    'loggers': {
        '': {
            'handlers': ['console', 'stackdriver_logging'],
            'level': 'INFO',
            'propagate': True,
        },
        'django': {
            'handlers': ['console', 'stackdriver_logging'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
and I write dictionary logs like:
_logger.info({'message_type':'profile', 'runtime': 500})
I would expect messages on Stackdriver to appear as:
{
    ...
    jsonPayload: {
        'message_type': 'profile',
        'runtime': 500
    }
}
However they appear with the following format:
{
    ...
    jsonPayload: {
        'message': "{'message_type': 'profile', 'runtime': 500}"
    }
}
where instead of the jsonPayload being the sent dictionary directly, the dictionary is string-encoded under 'message'. It's unclear what I should change in order to get the desired format on Stackdriver. Could anyone point me in the right direction?
If you are looking to control the format of the logs you ingest into Stackdriver, then you have to use Structured Logging.
If you use the Stackdriver Logging API or the command-line utility, gcloud logging, you can control the structure of your payloads. If you use the Stackdriver Logging agent to get your log entries, you can specify that the Logging agent convert your payloads to JSON format. See LogEntry for more information about different payload formats.
To enable structured logging, you must change the default configuration of the Logging agent when installing or reinstalling it.
If your logs are not supported by the standard parsers, you can write your own. Parsers consist of a regular expression that is used to match log records and apply labels to the pieces. See some examples here.
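As a concrete illustration of the API route, here is a minimal sketch of writing a dict as a structured entry with the google-cloud-logging client; the log name is a placeholder, and log_struct behaviour should be verified against your installed library version:

# Sketch: write a dict directly as jsonPayload via the Cloud Logging client.
# 'django_app' is a hypothetical log name.
import google.cloud.logging

client = google.cloud.logging.Client()
cloud_logger = client.logger('django_app')

# Sent as a structured entry, so the dict appears as jsonPayload
# instead of being stringified into jsonPayload.message.
cloud_logger.log_struct({'message_type': 'profile', 'runtime': 500})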
Related
Below is the logging configuration I have for a DRF application in Azure App Service. I tried using TimedRotatingFileHandler, but I was not able to save the logs with that option. Also, whenever the App Service restarts, the previous logs get erased. Is there a way to maintain day-wise logs in Azure App Service?
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'filters': {
        'error_mails': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': 'app.log.CustomExceptionReporter'
        },
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
    },
    'handlers': {
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': f'{BASE_DIR}/app_log.log',
            'maxBytes': 9000000,
            'backupCount': 10,
            'formatter': 'standard'
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
            'reporter_class': 'app.log.CustomExceptionReporter',
            # 'filters': ['require_debug_false'],
        },
    },
    'loggers': {
        'django': {
            'handlers': ['logfile', 'mail_admins'],
            'propagate': True,
            'level': 'ERROR',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'app': {
            'handlers': ['logfile', 'console'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
Is there a way to maintain day-wise logs in Azure App Service?
Currently, it is not possible to save logs day-wise. By default, App Service saves the log in LogFiles\Application\, and the log file name cannot be changed either.
Application logging
File System - Saves the logs to the filesystem. This setting only stays enabled for 12 hours.
Azure Blob Storage - You can also save the logs to Azure Blob Storage, even if you also save them to the filesystem (this feature is not available for Linux App Service).
You can also define a level of verbosity for the application logging to catch. This lets you filter the logging data into the Error, Warning, Information, or Verbose categories. Everything you log is caught by the Verbose value. Refer to Azure App Service Log files.
whenever the App Service restarts, the previous logs are getting erased
You have to take a backup of your app so that you can retain all the information (configuration, log files, app details) about the App Service. Refer to App Service Backup.
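If you still want day-wise rotation at the application level, the standard-library TimedRotatingFileHandler the asker mentions would be configured roughly as in the sketch below; whether the rotated files survive restarts depends on how your App Service plan persists storage, so treat the path and retention as assumptions to verify:

# Sketch: a day-wise rotating handler entry for the LOGGING config above.
# The path and backupCount are assumptions; on App Service, files outside
# persistent storage may still be lost on restart.
'handlers': {
    'daily_logfile': {
        'level': 'DEBUG',
        'class': 'logging.handlers.TimedRotatingFileHandler',
        'filename': f'{BASE_DIR}/app_log.log',
        'when': 'midnight',   # rotate at midnight, one file per day
        'backupCount': 30,    # keep the last 30 days
        'formatter': 'standard',
    },
},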
I have a Django Elastic APM setup that sends traces and logs to an ELK stack. It actually works, but not the way I need it to. I get the trace, I get metadata, and even the logs are received (2nd pic).
But the problem is, I don't get any messages in the logs section, and I didn't find a way to customize the fields.
But when I search directly in the logs, I see the following: the message exists.
Finally, when I search in the Discover section, I can see even more info, including the fields I actually need.
QUESTION
So, here are my questions. Is it possible to add at least the message info to the transaction logs (first pic)? Is it possible to add custom fields to the logs section (2nd pic)? Also, is there a way to make the logs at least clickable? (Also 2nd pic; I mean it's just plain text, so I have to go to Discover and copy-paste this info.)
Finally, why are the logs marked as errors if they are just logs and used as logs? I tried setting different levels such as debug or info, as you can see in the 2nd screen, but they still come through as errors and go into the apm-7.14-error* index.
Here are my logging settings:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'elasticapm': {
            'level': 'DEBUG',
            'class': 'elasticapm.contrib.django.handlers.LoggingHandler',
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['elasticapm'],
            'level': 'DEBUG',
            'propagate': False,
        },
    }
}
And that's how I send logs:
logger = logging.getLogger('meditations')

logger.info(
    'info',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.warning(
    'log',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.debug(
    'debug',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
I figured it out myself. The Elastic APM log handler doesn't forward debug/info/warning logs; it is only used to pass high-level (error/critical) events and doesn't hold the message. If you came here, you should use Django logging and ship the logs to Elasticsearch via Logstash or Filebeat. I used Logstash.
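As an illustration of that approach, a LOGGING entry using the python-logstash package might look like the sketch below; the package choice, host, and port are assumptions (Filebeat tailing a JSON logfile is an equally valid route):

# Sketch: ship Django logs to Logstash via the python-logstash package.
# host/port are placeholders for your Logstash TCP input.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'INFO',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'localhost',
            'port': 5959,    # must match the Logstash tcp input port
            'version': 1,    # logstash event schema version
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['logstash'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}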
I am looking for a way to remove 4xx errors from my New Relic error-rate reporting for a Django project.
It's a standard installation of New Relic and the Django framework.
Any help appreciated.
You should be able to update your New Relic config file to ignore these errors.
https://docs.newrelic.com/docs/agents/python-agent/configuration/python-agent-configuration#error-collector-settings
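For example, the Python agent's error collector can be told to skip HTTP status codes in newrelic.ini; the key below exists in current agent versions, but verify the name and accepted ranges against your agent's documentation:

# newrelic.ini sketch -- verify key names against your agent version.
[newrelic]
error_collector.enabled = true
# Exclude 4xx responses from the error rate (space-separated; ranges allowed).
error_collector.ignore_status_codes = 100-102 200 400-499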
Use a filter in your logging settings that changes the level of the 4xx errors. I've done this for 404 errors:
def change_404_level_to_INFO(record):
    if record.status_code == 404:
        record.levelname = 'INFO'
    return True

LOGGING = {
    ...,
    'filters': {
        'change_404_to_info': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': change_404_level_to_INFO,
        },
    },
    'handlers': {
        ...,  # your newrelic handler
    },
    'loggers': {
        # Root logger
        'django': {
            'handlers': ['newrelic', 'mail_admins'],
            'level': 'WARNING',
            'propagate': False,
            'filters': ['change_404_to_info'],
        },
    },
}
I am trying to set up my logging configuration in settings.py, and there are so many options that I'm having trouble replicating the built-in development server log (which prints to the console).
I want my production log to record the same information that would normally be printed to console in the development server log (GET requests, debug info, etc). I either need to know which settings I need to change below, or the location of the settings for the built-in development server log, so that I can copy that.
LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/home/django/django_log.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}
if DEBUG:
    # Make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']
I also do not want to add any code anywhere else but my settings.py, if at all possible. I don't want to go into my views.py and specify which errors to print or log; I never had to do that with the development server, so I'm hoping I can figure this out.
In Django 1.8, the default logging configuration for a debug environment is:
When DEBUG is True:
The django catch-all logger sends all messages at the WARNING level or higher to the console. Django doesn’t make any such logging calls at this time (all logging is at the DEBUG level or handled by the django.request and django.security loggers).
The py.warnings logger, which handles messages from warnings.warn(), sends messages to the console.
This logging configuration can be found at django.utils.log.DEFAULT_LOGGING. Note that the catch-all logger actually gets info messages as well, not just warning and above.
When overriding the default logging settings, note that disable_existing_loggers, if set to True, will shut up all of Django's default loggers.
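If you want to see exactly what you are overriding, you can print the default configuration; a short sketch:

# Print Django's default logging dict for reference.
from django.utils.log import DEFAULT_LOGGING
import pprint

pprint.pprint(DEFAULT_LOGGING)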
The development server logs every incoming request directly to stderr like this:
[18/Oct/2015 12:08:17] "GET /about/ HTTP/1.1" 200 9946
This is specific to the development server and will not be carried over to a production environment, unless you replicate it with middleware.
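If you want that request line in production, a middleware along the lines of the sketch below could approximate it. This is my own sketch, not something Django ships: it uses the old-style middleware API matching this Django 1.8-era setup, and the logger name is a placeholder.

# Sketch: approximate the runserver request line in production logs.
# Old-style middleware (MIDDLEWARE_CLASSES era); adapt for Django 1.10+.
import logging

logger = logging.getLogger('request_log')  # hypothetical logger name

class RequestLogMiddleware(object):
    def process_response(self, request, response):
        # Produces e.g.: "GET /about/ HTTP/1.1" 200 9946
        logger.info(
            '"%s %s %s" %d %d',
            request.method,
            request.get_full_path(),
            request.META.get('SERVER_PROTOCOL', ''),
            response.status_code,
            len(getattr(response, 'content', b'')),
        )
        return response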
I have a Django 1.3 application with several management commands. I want each command to write to a different logfile, instead of the default filename defined in settings.py. I run these commands every day as part of cron. Our current logfile looks like the example given at https://docs.djangoproject.com/en/dev/topics/logging/, and we use:
logger = logging.getLogger(__name__)
Thanks
You'll need to do this by adding a logger and a handler for each package:
'handlers': {
    'my_command_handler': {
        'level': 'DEBUG',
        'class': 'logging.FileHandler',
        'filename': '/path/to/my_command_log',
    },
    ...
},
'loggers': {
    'my_pkg.management.commands.my_command': {
        'level': 'DEBUG',
        'handlers': ['my_command_handler'],
    },
    ...
}
You may also want to consider adding 'propagate': False to the command loggers, if you don't want the messages to get to other loggers.
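For completeness, the matching command module then picks that logger up automatically through __name__; a sketch, with the module path assumed from the config above:

# my_pkg/management/commands/my_command.py -- sketch
import logging

from django.core.management.base import BaseCommand

# __name__ resolves to 'my_pkg.management.commands.my_command',
# matching the logger configured above.
logger = logging.getLogger(__name__)

class Command(BaseCommand):
    help = 'Example command that writes to its own logfile'

    def handle(self, *args, **options):
        logger.debug('my_command started')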