I have a Django 1.3 application with several management commands. I want each command to write to a different logfile, instead of the default filename defined in settings.py. I run these commands every day as part of a cron job. Our current logging configuration looks like the example given here: https://docs.djangoproject.com/en/dev/topics/logging/, and we use:
logger = logging.getLogger(__name__)
Thanks
You'll need to do this by adding a handler and a logger for each command:
'handlers': {
'my_command_handler': {
'level': 'DEBUG',
'class': 'logging.FileHandler',
'filename': '/path/to/my_command_log',
},
...
},
'loggers': {
'my_pkg.management.commands.my_command': {
'level': 'DEBUG',
'handlers': ['my_command_handler'],
},
...
}
You may also want to consider adding 'propagate': False to the command loggers if you don't want the messages to reach the parent loggers as well.
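For example, inside the command module itself the usual logging.getLogger(__name__) call picks that logger up automatically, since __name__ resolves to the dotted module path (a minimal sketch using the placeholder names from the config above):

# my_pkg/management/commands/my_command.py
import logging

from django.core.management.base import BaseCommand

# __name__ here is 'my_pkg.management.commands.my_command',
# which is exactly the logger name configured above.
logger = logging.getLogger(__name__)

class Command(BaseCommand):
    help = "Example command that logs to its own file"

    def handle(self, *args, **options):
        logger.debug("my_command started")
        # ... the actual work of the command ...
        logger.debug("my_command finished")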
I have encountered some strange behavior with Django loggers.
I am developing a front-end application using Django. During the login flow, I make requests to certain components and use log.warning() calls to trace the flow of those requests.
The logs worked perfectly until I decided to add a LOGGING configuration that writes the log output to a file, since I want to deploy the application via Docker and periodically check the log files.
This is the logging configuration I added to Django's settings:
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
'detailed': {
'class': 'logging.Formatter',
'format': "[%(asctime)s] - [%(name)s:%(lineno)s] - [%(levelname)s] %(message)s",
}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'level': 'INFO',
'formatter': 'detailed',
},
'file': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': "{}/am.log".format(BASE_DIR),
'mode': 'w',
'formatter': 'detailed',
'level': 'INFO',
'maxBytes': 2024 * 2024,
'backupCount': 5,
},
},
'loggers': {
'am': {
'level': 'INFO',
'handlers': ['console', 'file']
},
}
}
With this configuration in place, the logging stopped working. The file specified in the configuration, am.log, is indeed created, but nothing gets written to it, and even console logging no longer happens.
I took this logging configuration from one of my Django projects (the backend of this application), where it works perfectly. I really don't understand what I am doing wrong. Could you please help me or point me in the right direction? I would be very grateful.
I wish you all a good day!
By using the key "am" in your 'loggers' configuration, you're defining one logger with name "am":
'loggers': {
'am': { # <-- name of the logger
'level': 'INFO',
'handlers': ['console', 'file']
},
}
So to use that logger, you have to get it by that name:
logger = logging.getLogger("am")
logger.warning("This is a warning")
If you name your loggers after the module in which they're created, which is the recommended practice, then you need to define a logger for each module:
logger = logging.getLogger(__name__) # <-- this logger will be named after the module, e.g. your app name.
Then in your logging configuration you can specify logging behavior per module (per app):
'loggers': {
'my_app': { # <-- logging for my app
'level': 'INFO',
'handlers': ['console', 'file']
},
'django': { # <-- logging for Django module
'level': 'WARNING',
'handlers': ['console', 'file']
},
}
Or, if you just want to log everything the same way, use the root ('') logger, which doesn't have a name, just an empty string:
'loggers': {
'': { # <-- root logger
'level': 'INFO',
'handlers': ['console', 'file']
},
}
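For instance, with the 'am' logger defined above, a module inside the am app can keep using __name__, and its records will still reach both handlers through propagation (a minimal sketch; am/views.py is an assumed module path):

# am/views.py  (assumed module path, for illustration)
import logging

# __name__ is 'am.views', a child of the configured 'am' logger,
# so records propagate up to the 'console' and 'file' handlers.
logger = logging.getLogger(__name__)

def login_view(request):
    logger.warning("login flow started for %s", request.user)
    # ... rest of the view ...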
I was just wondering what is the current best way to perform logging within Django and AWS.
I was looking at this article, which suggested writing to the following location, but I have found that this doesn't work:
/opt/python/log
Then use commands such as
command: chmod g+s /opt/python/log
command: chown root:wsgi /opt/python/log
I have also seen articles that suggest using watchtower, but I don't like the idea of adding my secret access keys into the code:
https://medium.com/#zoejoyuliao/plug-your-django-application-logging-directly-into-aws-cloudwatch-d2ec67898c0b
What is the current best way to do this?
Thanks
You can customise your logging in the settings.py file.
Add this to your loggers section:
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
'APPNAME': {
'handlers': ['applogfile',],
'level': 'DEBUG',
},
}
Add this to the handlers section; here you can specify the log file path:
'applogfile': {
'level':'DEBUG',
'class':'logging.handlers.RotatingFileHandler',
'filename': os.path.join(DJANGO_ROOT, 'APPNAME.log'),
'maxBytes': 1024*1024*15, # 15MB
'backupCount': 10,
},
With this call you get your customised logger:
logger = logging.getLogger('APPNAME')
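For example, in any module of your app (a minimal sketch; APPNAME and the view are placeholders):

import logging

# must match the logger name defined in the LOGGING setting
logger = logging.getLogger('APPNAME')

def my_view(request):
    logger.debug("my_view called by %s", request.user)
    # DEBUG and above ends up in APPNAME.log via the RotatingFileHandler
    ...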
I want to keep the convention of naming loggers for their module with logger = logging.getLogger(__name__), but have a single logger that captures them all.
Is that possible, and what should the logger's name be?
(I do not want to switch everything to logger = logging.getLogger('django'))
Yup, the default logger is the root logger (''); you can get it via:
logging.getLogger()
logging.getLogger('')
Note the different type:
>>> logging.getLogger('')
<logging.RootLogger object at 0x7f0b9521ec18>
In Django you can add handlers to the root logger easily:
LOGGING = {
...
'loggers': {
'': {
'level': 'DEBUG',
'handlers': ['system_log', 'debug_log', 'sentry'],
},
},
}
If you attach a handler to the root logger, it will see ALL the logs. This is not always what you want. So if you attach some handlers to a more specialised logger, you can set 'propagate': False on that logger so that its messages don't also reach the root logger.
'loggers': {
'': {
'level': 'DEBUG',
'handlers': ['system_log', 'debug_log', 'sentry'],
},
'django': {
'level': 'INFO',
'handlers': ['another_handler_just_for_django'],
'propagate': False,
},
},
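With that in place, project code can keep using the __name__ convention and everything bubbles up to the root logger, while Django's own messages stay with their dedicated handler (a minimal sketch; myapp/services.py is an assumed module):

# myapp/services.py  (assumed module, for illustration)
import logging

# Named 'myapp.services'; no specific logger is configured for it,
# so its records propagate up to the root ('') logger's handlers.
logger = logging.getLogger(__name__)

def do_work():
    logger.info("handled by system_log, debug_log and sentry")

# Records logged under 'django' stop at 'another_handler_just_for_django',
# because 'propagate' is False for that logger in the config above.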
I'm using Django 1.6 and would like to add to—not replace—Django's default logging. Specifically, I want to make it so that in addition to Django's default logging behavior, any log record with a log level of DEBUG or higher gets written to a log file that gets automatically rotated.
Based on what I've found through searches and my own experimenting, it seems like you have to redefine all of the logging (Django's defaults plus my rotating file logging), even when using 'disable_existing_loggers': False. I figure I'm probably doing something wrong.
I'd like to see an example of what to put in my settings.py file that will allow me to accomplish this.
settings.py is just Python, so we can do:
from django.utils.log import DEFAULT_LOGGING
# Use defaults as the basis for our logging setup
LOGGING = DEFAULT_LOGGING
# We need some formatters. These ones are from the docs.
LOGGING['formatters'] = {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
},
'simple': {
'format': '%(levelname)s %(message)s'
},
}
# A log file handler that rotates the file once it grows too large
LOGGING['handlers'].update({
    'rotate': {
        'level': 'DEBUG',
        'class': 'logging.handlers.RotatingFileHandler',
        'formatter': 'verbose',
        'filename': '/tmp/debug.log',
        # Without maxBytes, RotatingFileHandler never actually rotates
        'maxBytes': 1024 * 1024 * 5,
        'backupCount': 5,
    }
})
# The default 'django' logger is a catch-all that does nothing. We replace it with
# a rotating file handler.
LOGGING['loggers'].update({
'django': {
'handlers': ['rotate'],
'propagate': True,
'level': 'DEBUG',
}
})
# If you don't want to completely replace the django handler, you could do something
# like this instead:
#LOGGING['loggers']['django']['handlers'] += ['rotate']
This will add your rotating file handler to the existing handlers, define the basic formatters, and replace the catch-all 'django' logger (which does nothing by itself) with one that sends everything at DEBUG and above to the rotating file.
Based on your comments I tried this. It seems to do what you're asking.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'handlers': {
'rotate': {
    'level': 'DEBUG',
    'class': 'logging.handlers.RotatingFileHandler',
    'filename': '/tmp/debug.log',
    'maxBytes': 1024 * 1024 * 5,  # needed for rotation to actually happen
    'backupCount': 5,
}
},
'loggers': {
'django': {
'handlers': ['rotate'],
'propagate': True,
'level': 'DEBUG',
}
},
}
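To check that records actually land in the file, you can emit a test message from a Django shell (a quick sanity check, assuming one of the configurations above is active):

# python manage.py shell
import logging

logger = logging.getLogger('django')   # the logger reconfigured above
logger.debug("rotation smoke test")    # should appear in /tmp/debug.log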
By default I can enable logging in settings.py via the LOGGING configuration by creating a logger named "" (the root logger), which will catch all logs. But what if I only want to see logging from my project's apps, as opposed to Django internals?
I can imagine explicitly getting a logger in each of my Django app modules and naming it by some convention, e.g. logging.getLogger("myproject." + __file__). Then I could create a logger called 'myproject' (in LOGGING) which picks up all of these for output. I'd prefer not to hardcode my project name, so I'd do some os.path logic on __file__ to extract the full namespace up to the file at any arbitrary depth.
At this point, I stop and wonder is there an easier way?
You could use a scheme like the following to create identical loggers for all of your local apps without having to manually add them all to the logging configuration.
First, split out your local apps:
LOCAL_APPS = [
'myapp1',
'myapp2',
...
]
THIRD_PARTY_APPS = [
'django. ...',
...
]
INSTALLED_APPS = LOCAL_APPS + THIRD_PARTY_APPS
Next create the logger configuration for your local apps:
local_logger_conf = {
'handlers': ['my_handler',],
'level': 'DEBUG',
}
Finally, define your loggers as follows:
'loggers': { app: copy.deepcopy(local_logger_conf) for app in LOCAL_APPS }
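Putting it all together, the pieces might be assembled like this (a minimal sketch; 'my_handler', its class and filename are assumptions to be adapted to your setup):

import copy

local_logger_conf = {
    'handlers': ['my_handler'],
    'level': 'DEBUG',
}

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # assumed example handler; swap in whatever handler you actually want
        'my_handler': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'local_apps.log',
        },
    },
    # one identical logger per local app: 'myapp1', 'myapp2', ...
    'loggers': {app: copy.deepcopy(local_logger_conf) for app in LOCAL_APPS},
}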
Not sure if I fully understood your question, because the answer seems too simple.
Assuming you have defined in LOGGING a handler for your project's apps, for example like this:
'handlers': {
    'handler_for_my_apps': {
        'level': 'DEBUG',
        'class': 'logging.FileHandler',
        'filename': 'debug.log',
    },
},
and given your apps app1, app2, and so on, you could capture all the logs from those apps, without any of Django's internal logs, by defining the loggers:
'loggers': {
    'app1': {
        'handlers': ['handler_for_my_apps'],
        'level': 'DEBUG',
    },
    'app2': {
        'handlers': ['handler_for_my_apps'],
        'level': 'DEBUG',
    },
},
There will be no Django logs in the same file, unless of course you defined a logger named django with a handler handler_for_my_apps.
In your apps you can get the logger using logging.getLogger(__name__) as recommended by the docs.
Unless I misunderstood your question...
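For illustration, a module inside app1 then picks up the 'app1' logger automatically (a minimal sketch; app1/views.py is an assumed module path):

# app1/views.py  (assumed module path)
import logging

# __name__ is 'app1.views'; records propagate up to the configured 'app1' logger.
logger = logging.getLogger(__name__)

def index(request):
    logger.debug("this ends up in debug.log")
    # Django's own loggers ('django', 'django.request', ...) don't use
    # 'handler_for_my_apps', so their records won't appear in that file.
    ...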
Thank you for sharing; reading this post helped me solve logging for my project. I'll share the solution I opted for, hoping it can help other people.
Define a list of dictionaries:
LOCAL_APPS = [
{'app_name': 'app1', 'level': 'INFO'},
{'app_name': 'app2', 'level': 'DEBUG'},
]
Now create a function that modifies the LOGGING setting:
def create_app_logger(app_name, level):
    app_handler = app_name + '_handler'
    # One file handler per app (assumes a 'custom' formatter is already defined in LOGGING)
    LOGGING['handlers'][app_handler] = {
        'level': level,  # use the per-app level instead of hard-coding 'INFO'
        'class': 'logging.FileHandler',
        'filename': f'../logs/{app_name}_logs.log',
        'formatter': 'custom',
    }
    # One logger per app (assumes a 'console' handler is already defined in LOGGING)
    LOGGING['loggers'][app_name] = {
        'handlers': ['console', app_handler],
        'level': level,
        'propagate': False,
    }
Finally loop through the list:
for dictionary in LOCAL_APPS:
create_app_logger(dictionary['app_name'], dictionary['level'])
Since an app can be a world of its own, this way you'll have a log file for each app, plus you have control over the logging level you want for each one. It can be further personalised, of course.
Cheers