I was just wondering what the current best way is to perform logging with Django on AWS.
I was looking at this article, which suggested writing to the following location, but I have found that this doesn't work:
/opt/python/log
and then to use commands such as:
command: chmod g+s /opt/python/log
command: chown root:wsgi /opt/python/log
I have also seen articles that suggest using watchtower, but I don't like the idea of adding my secret access keys into the code:
https://medium.com/@zoejoyuliao/plug-your-django-application-logging-directly-into-aws-cloudwatch-d2ec67898c0b
What is the current best way to do this?
Thanks
You can customise your logging in your settings.py file.
Add this to your loggers section:
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
'APPNAME': {
'handlers': ['applogfile',],
'level': 'DEBUG',
},
}
Add this to your handlers section; here you can specify the filename path:
'applogfile': {
'level':'DEBUG',
'class':'logging.handlers.RotatingFileHandler',
'filename': os.path.join(DJANGO_ROOT, 'APPNAME.log'),
'maxBytes': 1024*1024*15, # 15MB
'backupCount': 10,
},
With this call you use your customised logger:
logger = logging.getLogger('APPNAME')
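Putting the two fragments together, here is a minimal, self-contained sketch, using logging.config.dictConfig directly, with a temporary directory standing in for DJANGO_ROOT and 'APPNAME' kept as a placeholder app name:

```python
import logging
import logging.config
import os
import tempfile

# Hypothetical stand-ins for DJANGO_ROOT and the app name used above
DJANGO_ROOT = tempfile.mkdtemp()
APP_NAME = 'APPNAME'

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(DJANGO_ROOT, '%s.log' % APP_NAME),
            'maxBytes': 1024 * 1024 * 15,  # 15MB
            'backupCount': 10,
        },
    },
    'loggers': {
        APP_NAME: {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    },
}

# In a real project Django applies the LOGGING setting for you;
# the explicit call here just makes the example runnable on its own.
logging.config.dictConfig(LOGGING)

logger = logging.getLogger(APP_NAME)
logger.debug('debug message lands in APPNAME.log')
```

In a real Django project the LOGGING dict lives in settings.py and Django applies it at startup, so only the getLogger call appears in your application code.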
I have a Django application with Celery working fine on Elastic Beanstalk using AWS SQS, as long as I do not include logging.
When I include logging with a 'logging.FileHandler', Celery gets a permission-denied error because it doesn't have the rights to write to my log files. This is my error:
ValueError: Unable to configure handler 'celery_file': [Errno 13]
Permission denied: '/opt/python/log/django.log'
This is my logging setup:
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler'
},
'file': {
'level': log_level,
'class': 'logging.FileHandler',
#'filters': ['require_debug_false'],
'filename': os.environ.get('LOG_FILE_PATH', LOG_FILE_PATH + '/var/log/django.log')
},
'mail_admins': {
'level': 'ERROR',
#'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
},
'cqc_file': {
'level': log_level,
'class': 'logging.FileHandler',
#'filters': ['require_debug_false'],
'filename': os.environ.get('LOG_FILE_PATH', LOG_FILE_PATH + '/var/log/cqc.log')
},
'null': {
'class': 'logging.NullHandler',
},
'celery_file': {
'level': 'INFO',
'class': 'logging.FileHandler',
'filename': os.environ.get('LOG_FILE_PATH', LOG_FILE_PATH + '/var/log/celery.log'),
}
},
'loggers': {
'': {
'level': 'WARNING',
'handlers': ['file'],
},
'debug': {
'level': log_level,
'handlers': ['file'],
},
'django.security.DisallowedHost': {
'handlers': ['null'],
'level': 'CRITICAL',
'propagate': False,
},
'django.request': {
'handlers': ['file','mail_admins'],
'level': 'DEBUG',
'propagate': True,
},
'cqc_report': {
'level': 'INFO',
'handlers' : ['cqc_file']
},
'celery.task': {
'handlers': ['console', 'celery_file'],
'level': 'INFO',
'propagate': True,
}
}
}
I think I need to give Celery access to the django.log, celery.log and cqc.log files through an Elastic Beanstalk container command. I tried this using:
03_change_permissions:
command: chmod g+s /opt/python/log
04_change_owner:
command: chown root:wsgi /opt/python/log
05_add_celery_to_wsgi:
command: usermod -a -G wsgi celery
But this just gave me an error saying no user celery (or something to that effect).
How do I get the file logging to work?
The error tells you that the user running the process does not have the privileges needed to write to the log. Your solution
03_change_permissions:
command: chmod g+s /opt/python/log
04_change_owner:
command: chown root:wsgi /opt/python/log
05_add_celery_to_wsgi:
command: usermod -a -G wsgi celery
seems to be on the right track: when a user lacks privileges for a location, the fix is to grant them. However, your assumption that the user is celery appears to be wrong; a big clue is that this user does not exist. So you need to make sure that:
the user to run the process exists
you identify the correct user for the process
the user has the necessary privileges for logging
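As a diagnostic sketch (hypothetical; run it from the failing process, e.g. at the top of the Celery worker's startup code or in a Django shell), you can report who the process actually runs as and whether that user may write to the log directory:

```python
import getpass
import os

def report_access(path):
    """Return the effective user and whether it can write to `path`."""
    user = getpass.getuser()
    writable = os.access(path, os.W_OK)
    return user, writable

# Example: check the Elastic Beanstalk log directory from the running process
user, writable = report_access('/opt/python/log')
print('running as %r; /opt/python/log writable: %s' % (user, writable))
```

Once you know the real username (often the wsgi user on these Elastic Beanstalk platforms, not celery), you can point the chown/usermod commands above at the user that actually exists.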
I have encountered a strange behavior of Django Loggers.
I am developing a front end application using Django. During the login service, I make some requests to certain components and use log.warning() calls to see the flow of the requests.
The logs worked perfectly until I decided to add a LOGGING configuration to write the log output to a file, as I want to deploy the application via Docker and periodically check the log files.
When I added the following Django configuration concerning logging:
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
'detailed': {
'class': 'logging.Formatter',
'format': "[%(asctime)s] - [%(name)s:%(lineno)s] - [%(levelname)s] %(message)s",
}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'level': 'INFO',
'formatter': 'detailed',
},
'file': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': "{}/am.log".format(BASE_DIR),
'mode': 'w',
'formatter': 'detailed',
'level': 'INFO',
'maxBytes': 2024 * 2024,
'backupCount': 5,
},
},
'loggers': {
'am': {
'level': 'INFO',
'handlers': ['console', 'file']
},
}
}
the logging stopped working. The file specified in the logging configuration, am.log, is indeed created, but nothing gets printed to it. Even the console logging does not take place.
I have taken this logging configuration from one of my Django projects for the backend of this application, and there it works perfectly. I really don't understand what I am doing wrong. Could you please help me or guide me in the right direction? I would be very grateful.
I wish you all a good day!
By using the key "am" in your 'loggers' configuration, you're defining one logger with name "am":
'loggers': {
'am': { # <-- name of the logger
'level': 'INFO',
'handlers': ['console', 'file']
},
}
So to use that logger, you have to get it by that name:
logger = logging.getLogger("am")
logger.warning("This is a warning")
If you name your loggers by the name of the module in which you're running, which is recommended practice, then you need to define each module logger:
logger = logging.getLogger(__name__) # <-- this logger will be named after the module, e.g. your app name.
Then in your logging configuration you can specify logging behavior per module (per app):
'loggers': {
'my_app': { # <-- logging for my app
'level': 'INFO',
'handlers': ['console', 'file']
},
'django': { # <-- logging for Django module
'level': 'WARNING',
'handlers': ['console', 'file']
},
}
Or if you just want to log everything the same, use the root ('') logger, which doesn't have a name, just an empty string:
'loggers': {
'': { # <-- root logger
'level': 'INFO',
'handlers': ['console', 'file']
},
}
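For instance, with only a root logger configured, messages from any named logger propagate up to it. A self-contained sketch (logging to a StringIO instead of the console so the output can be inspected):

```python
import io
import logging
import logging.config

stream = io.StringIO()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'stream': stream,  # StringIO stands in for stderr, to keep this self-contained
        },
    },
    'loggers': {
        '': {  # root logger: catches every propagating logger
            'level': 'INFO',
            'handlers': ['console'],
        },
    },
}

logging.config.dictConfig(LOGGING)

# Neither of these loggers has its own config entry;
# both propagate to the root logger's handler.
logging.getLogger('am').info('reaches the root handler')
logging.getLogger('some.other.module').info('so does this')
print(stream.getvalue())
```

Because every named logger propagates to the root by default, neither 'am' nor 'some.other.module' needs its own entry.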
I'm trying to configure django logging in the django settings file so that it logs django info and info for my application to a custom file for easy viewing. Here's my logging config:
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'console': {
# exact format is not important, this is the minimum information
'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
},
'file': {
# exact format is not important, this is the minimum information
'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
},
},
'handlers': {
'file': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'formatter': 'file',
'filename': 'logs/django_log.log',
'backupCount': 10, # keep at most 10 log files
'maxBytes': 5242880, # 5*1024*1024 bytes (5MB)
},
'console': {
'level': 'INFO',
'class': 'logging.StreamHandler',
'formatter': 'console',
},
},
'loggers': {
'django': {
'handlers': ['file', 'console'],
'level': 'INFO',
'propagate': True,
},
'py.warnings': {
'handlers': ['console'],
},
'my_application': {
'level': 'INFO',
'handlers': ['file', 'console'],
# required to avoid double logging with root logger
'propagate': False,
},
},
}
This works on my local manage.py test server: both Django events and events that I log (with my_application as the logger name) appear. However, on my web server the log file is created but, oddly, only populated with occasional Django WARNING messages, so there is no permissions error or inability to access the log file. Since the same config works locally, the config can't be the issue, and it clearly enables INFO-level logs.
My server setup is taken from this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04 and uses Gunicorn with Nginx as a reverse proxy. Could the issue be with the configs for these? I'm stumped.
Additionally, where's a good best practice place to store this django log file?
Also, one related bonus question: What's a good best practice free/cheap service that can notify you if a specific error is logged? It seems like a good idea to set something like that up, but I don't think the django emailer is necessarily the most elegant or best.
I have the following config (based on this):
'loggers': {
'django': {
'handlers': ['console'],
'level': 'INFO',
},
'root': {
'handlers': ['console'],
'level': 'INFO',
},
},
When I run my tests like this (Django 1.8.4):
./manage.py test
I get DEBUG-level output from a source line in my own code that looks like
import logging
logging.debug("Shouldn't be seen, but is")
The line indicates the log message is going to the root logger, as I would expect:
DEBUG:root:blah: Shouldn't be seen, but is
As the tests run, it says:
nosetests --verbosity=1
If I say
./manage.py test --verbosity=0
that nosetests message goes away, but the debug logging does not.
What is happening? Is my logging config wrong? Is nosetests interfering? Django?
I think my logging config is being read. I suppressed a django.request WARNING by configuring that logger in this config file.
How do I debug this?
(I read this related post, it didn't help.)
Your logger configuration is not written correctly. The 'root' configuration goes outside the 'loggers' part, like this:
'loggers': {
'django': {
'handlers': ['console'],
'level': 'INFO',
},
},
'root': {
'handlers': ['console'],
'level': 'INFO',
}
Setting it up like this should work.
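A runnable sketch of the corrected layout (with a StringIO standing in for the console) shows the effect: once 'root' sits at the top level with level INFO, DEBUG calls to the root logger are suppressed:

```python
import io
import logging
import logging.config

stream = io.StringIO()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'stream': stream,  # StringIO stands in for the console
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': False,
        },
    },
    # 'root' is a sibling of 'loggers', not a key inside it
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
}

logging.config.dictConfig(LOGGING)

logging.debug("Shouldn't be seen")  # below the root logger's INFO level: dropped
logging.info('Visible')             # at INFO: emitted
print(repr(stream.getvalue()))
```

With the misplaced config, the root logger keeps its default effective behaviour, which is why the DEBUG message leaked through.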
I have a Django 1.3 application with several management commands. I want each command to write to a different logfile, instead of the default filename defined in settings.py. I run these commands daily as part of cron. Our current logfile looks like the example given here: https://docs.djangoproject.com/en/dev/topics/logging/. And we use
logger = logging.getLogger(__name__)
Thanks
You'll need to do this by adding a logger and a handler for each package:
'handlers': {
'my_command_handler': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': '/path/to/my_command_log',
},
...
},
'loggers': {
'my_pkg.management.commands.my_command': {
'level': 'DEBUG',
'handlers': ['my_command_handler'],
},
...
}
You may also want to consider adding 'propagate': False to the command loggers, if you don't want the messages to get to other loggers.
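A self-contained sketch of this setup, with hypothetical paths, the placeholder name my_pkg.management.commands.my_command, and a StringIO standing in for the console handler the root logger would normally use:

```python
import io
import logging
import logging.config
import os
import tempfile

# Hypothetical locations for the per-command log
log_dir = tempfile.mkdtemp()
command_log = os.path.join(log_dir, 'my_command_log')
root_stream = io.StringIO()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'my_command_handler': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': command_log,
        },
        'console': {
            'class': 'logging.StreamHandler',
            'stream': root_stream,  # StringIO stands in for the console
        },
    },
    'root': {'handlers': ['console'], 'level': 'DEBUG'},
    'loggers': {
        'my_pkg.management.commands.my_command': {
            'level': 'DEBUG',
            'handlers': ['my_command_handler'],
            'propagate': False,  # keep command output out of the root handlers
        },
    },
}

logging.config.dictConfig(LOGGING)

# Inside the command module, logging.getLogger(__name__) resolves to this name:
logger = logging.getLogger('my_pkg.management.commands.my_command')
logger.info('goes only to my_command_log')
```

With 'propagate': False the command's messages stop at my_command_handler and never reach the root logger's handlers, which is exactly the behaviour you want for per-command logfiles.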