My Django logger works fine on my local machine but does not work when I deploy it to the server.
Local: OS X 10.9.5, Python 2.7.5, Django 1.6.2
Server: Ubuntu 12.04, Apache 2.2.22, mod_wsgi Version: 3.3-4ubuntu0.1, Python 2.7.3, Django 1.6
On my local setup I run python manage.py syncdb, which creates the survey.log file. I then follow it with tail -f survey.log so I can see the error messages as they are created.
On my server I also run python manage.py syncdb, which creates the survey.log file, and I follow it with tail -f survey.log. However, I cannot see any of my debug messages, and when I inspect the file with nano it is empty.
Why is no logging data being recorded in survey.log in my production environment? What am I missing?
views.py
import logging
logger = logging.getLogger(__name__)
logger.debug('This is your images list in 7: %s', images)
settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            # 'filename': os.path.join(DJANGO_ROOT, 'survey.log'),
            'filename': 'survey.log',
            'maxBytes': 1024 * 1024 * 15,  # 15 MB
            'backupCount': 10,
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'survey': {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    }
}
EDIT
I have now discovered that my Django error logs are getting written to my Apache error logs. Is this in any way normal?
Running
sudo tail /var/log/apache2/error.log
provides the expected output that I should be getting in my Django log file above, e.g.
[15/Dec/2014 21:36:07] DEBUG [survey:190] This is your images list in 7: ['P3D3.jpg', 'P1D1.jpg', 'P5D5.jpg']
You aren't using the correct logger in your views.py. Try this:
import logging
logger = logging.getLogger('survey')
logger.debug('This is a debug message.')
The logger you get must match the loggers defined in LOGGING.
My guess is that the log file actually gets created, but in a directory you don't expect it to be in. Try putting a full path in your configuration, something like 'filename': '/tmp/survey.log', then restart Apache and check whether it's there. Of course, first you must apply the solution that Derek posted.
The reason you see the log messages in your Apache logs is probably that mod_wsgi somehow configures a default (root) logger and your configuration doesn't disable it (you have 'disable_existing_loggers': False).
If you don't pass a full path for the filename, the file is created in the working directory of the running process (the directory it was started in, unless it changes directory with os.chdir); which directory that is depends on the operating system and how the process was launched.
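To take the guesswork out of the path, here is a sketch using the BASE_DIR variable that Django 1.6's default settings template defines (an assumption; use whatever project-root variable your settings already have, such as the commented-out DJANGO_ROOT):
# settings.py sketch -- BASE_DIR comes from Django 1.6's startproject template
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            # absolute path, so the log lands in a known place regardless of
            # the working directory Apache/mod_wsgi starts the process in
            'filename': os.path.join(BASE_DIR, 'survey.log'),
            'maxBytes': 1024 * 1024 * 15,  # 15 MB
            'backupCount': 10,
        },
    },
    'loggers': {
        'survey': {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    },
}
Whichever directory you choose, the user Apache/mod_wsgi runs as (typically www-data on Ubuntu) needs write access to it.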
Related
I am trying to make Django create and rotate new logs every 10 minutes using TimedRotatingFileHandler. My settings are as follows:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'filename': 'logs/site.log',
            'backupCount': 10,
            'when': 'm',
            'interval': 10,
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
The first log file is created successfully. But when it is time to rotate the log file, I get the following error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'F:\\logs\\site.log' -> 'F:\\logs\\site.log.2021-05-22_19-18'
How do I configure the logging so that the current log is copied to a timed log and new data written to the main log file?
Unfortunately, this file handler is not suitable for Windows, because an open file there cannot be moved or renamed. While it may work under some circumstances when using ./manage.py runserver, it will fail with any production-ready WSGI or ASGI server, since those servers spawn multiple processes that are not aware of the other processes holding the log file open.
Answering my own question. The solution that seems to work for me is including the flag --noreload when running the server.
This allows the logs to be rotated correctly, although I don't know why.
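For reference, that means starting the development server with the autoreloader disabled:
# run the dev server in a single process, without the autoreload watcher
python manage.py runserver --noreload
A plausible explanation (an assumption, not something verified here) is that the autoreloader normally runs the server in a second process, so two processes hold site.log open and Windows refuses the rename when the handler tries to roll the file over.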
I launch my Django 1.8 app with UWSGI in 10 processes. UWSGI is set up under the virtualenv.
Django file logging config is the following:
LOG_FILE_PATH = '/tmp/app_logs/debug.log'
...
'handlers': {
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'formatter': 'simple'
    },
    'file': {
        'level': 'INFO',
        'class': 'logging.handlers.TimedRotatingFileHandler',
        'filename': LOG_FILE_PATH,
        'formatter': 'verbose',
        'when': 'midnight',
        'interval': 1,
        'backupCount': 0,
...
When I start UWSGI, logging works just fine: I see debug.log being updated with entries, and I also see activity in the UWSGI log file:
/var/log/uwsgi/mysite.log
After midnight, I see that the Django log file rotation happened (debug.log.2015-09-30 is indeed created), but it is almost empty:
$ cat debug.log.2015-09-30
INFO 2015-10-01 17:45:21,362 MainScreen 1836 140697212401600 MainScreen is called with the following parameters: {}
ERROR 2015-10-01 17:45:21,362 MainScreen 1836 140697212401600 Login error: NotEnoughParametersError {}
Also, the current log file debug.log is no longer being updated with app activity, and neither is the UWSGI log file:
$ tail -f /var/log/uwsgi/mysite.log
remains silent while the app is up and running. If I restart UWSGI everything gets back to normal until the next midnight.
I suspect this might be a concurrency issue with Django logging. How do I overcome that? And how do I fix UWSGI logs too?
For the Django log I resolved the issue by replacing TimedRotatingFileHandler with ConcurrentLogHandler.
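A sketch of what that swap might look like, assuming the concurrent-log-handler package (the maintained fork of the older ConcurrentLogHandler) is installed; its handler rotates by size rather than by time, so when/interval give way to maxBytes:
# settings.py sketch -- assumes `pip install concurrent-log-handler`
LOG_FILE_PATH = '/tmp/app_logs/debug.log'

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'INFO',
            # lock-file based handler that tolerates several uWSGI worker
            # processes writing to the same file; rotates by size
            'class': 'concurrent_log_handler.ConcurrentRotatingFileHandler',
            'filename': LOG_FILE_PATH,
            'maxBytes': 10 * 1024 * 1024,
            'backupCount': 7,
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'INFO',
        },
    },
}
The original package exposes the same handler as cloghandler.ConcurrentRotatingFileHandler; point 'class' at whichever of the two is installed.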
I have the following config (based on this):
'loggers': {
    'django': {
        'handlers': ['console'],
        'level': 'INFO',
    },
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
},
When I run my tests like this (Django 1.8.4)
./manage.py test
I get DEBUG-level output from a source line in my own code that looks like
import logging
logging.debug("Shouldn't be seen, but is")
The line indicates the log message is going to the root logger, as I would expect:
DEBUG:root:blah: Shouldn't be seen, but is
As the tests are running it says
nosetests --verbosity=1
If I say
./manage.py test --verbosity=0
that nosetests message goes away, but the debug logging does not.
What is happening? Is my logging config wrong? Is nosetests interfering? Django?
I think my logging config is being read. I suppressed a django.request WARNING by configuring that logger in this config file.
How do I debug this?
(I read this related post, it didn't help.)
Your logger configuration is not written correctly. The 'root' configuration goes outside the 'loggers' part, like this:
'loggers': {
    'django': {
        'handlers': ['console'],
        'level': 'INFO',
    },
},
'root': {
    'handlers': ['console'],
    'level': 'INFO',
}
Setting it up like this should work.
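Put together with the rest of the dictConfig keys, a fuller sketch might look like the following (the console handler definition is an assumption, since the question doesn't show it):
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        # assumed handler -- the question only references it by name
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
    # 'root' is a top-level key, a sibling of 'loggers', not an entry inside it
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
}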
I am trying to configure logging in settings.py, and with so many options I'm having trouble replicating the built-in development server log (the one that prints to the console).
I want my production log to record the same information that would normally be printed to the console by the development server (GET requests, debug info, etc.). I either need to know which settings to change below, or the location of the settings for the built-in development server log, so that I can copy them.
LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/home/django/django_log.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}

if DEBUG:
    # make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']
I also do not want to have to add any code anywhere else but my settings.py if at all possible. I don't want to have to go into my views.py and specify what errors to print or log; I never had to do that with the development server, so I'm hoping I can figure this out.
In Django 1.8, the default logging configuration for a debug environment is:
When DEBUG is True:
The django catch-all logger sends all messages at the WARNING level or higher to the console. Django doesn’t make any such logging calls at this time (all logging is at the DEBUG level or handled by the django.request and django.security loggers).
The py.warnings logger, which handles messages from warnings.warn(), sends messages to the console.
This logging configuration can be found at django.utils.log.DEFAULT_LOGGING. Note that the catch-all logger actually gets info messages as well, not just warning and above.
When overriding the default logging settings, note that disable_existing_loggers, if set to True, will silence all of Django's default loggers.
The development server logs every incoming request directly to stderr like this:
[18/Oct/2015 12:08:17] "GET /about/ HTTP/1.1" 200 9946
This is specific to the development server and will not be carried over to a production environment, unless you replicate it with middleware.
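A rough sketch of such a middleware, written against the old-style MIDDLEWARE_CLASSES API that Django 1.8 uses (the module path, class name, and logger name here are made up for the example):
# myproject/middleware.py -- add 'myproject.middleware.RequestLogMiddleware'
# to MIDDLEWARE_CLASSES and configure the 'request_logger' logger in LOGGING
import logging

logger = logging.getLogger('request_logger')


class RequestLogMiddleware(object):
    def process_response(self, request, response):
        # roughly mimics the runserver line: "GET /about/ HTTP/1.1" 200
        logger.info('"%s %s" %s', request.method,
                    request.get_full_path(), response.status_code)
        return response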
I'm using Django 1.4.1.
There's a script tmp.py in the same directory as manage.py; it does some routine operations and is started from cron with a command like python /path/to/project/tmp.py
Here is tmp.py:
import os

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MYPROJECT.settings")

if __name__ == "__main__":
    import logging
    logger = logging.getLogger('parser.test')
    logger.info('Hello, world')
This script makes a log entry, but the entry is simply ignored by the logger.
When I put the same three lines into any of my views, the log entry appears in my log file as expected.
Logger config from settings.py:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
        },
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': "/hosting/MYPROJECTS/logs/logfile",
            'maxBytes': 50000,
            'backupCount': 2,
            'formatter': 'standard',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
            'level': 'WARN',
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'parser': {
            'handlers': ['console', 'logfile'],
            'level': 'DEBUG',
        },
    }
}
Why does this logger work only in views? Maybe the logger is still not configured after os.environ.setdefault("DJANGO_SETTINGS_MODULE", "MYPROJECT.settings") in my tmp.py?
I couldn't solve this problem, but I found an alternative solution: write a custom management command instead of a separate script next to manage.py:
https://docs.djangoproject.com/en/dev/howto/custom-management-commands/
Logging works fine in management commands.
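A minimal sketch of such a command, assuming an app named parser is in INSTALLED_APPS (the app name and command file name are just placeholders):
# parser/management/commands/run_parser_job.py
# (both management/ and commands/ need an empty __init__.py)
import logging

from django.core.management.base import BaseCommand

logger = logging.getLogger('parser.test')


class Command(BaseCommand):
    help = "Routine job previously run as tmp.py"

    def handle(self, *args, **options):
        # settings (and therefore LOGGING) are fully loaded by the time
        # handle() runs, so this reaches the configured handlers
        logger.info('Hello, world')
cron then calls it through manage.py, e.g. python /path/to/project/manage.py run_parser_job.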
I managed to get Django logging settings to work for my non-Django script.
First, to orient ourselves, the project layout is: <myproject>/<src>/<myproject>/settings.py and <myproject>/<src>/manage.py
# FILE: <myproject>/<src>/scripts/save_report_2_mongo.py
[...]
import logging
logger = logging.getLogger(__name__)
[...]
At the top of my script I import logging, then I create the logger object. In my case __name__ == 'scripts.save_report_2_mongo'. If the OP's setup is anywhere near mine, then in their example __name__ != "__main__", and the logger is never instantiated. Right?
Finally, inside settings.py
# FILE: <myproject>/<src>/<myproject>/settings.py
[...]
LOGGING = {
    [...]
    'formatters': {...},
    'handlers': {...},
    'loggers': {
        'django': {
            'handlers': ['console', 'fileInfo', 'fileDebug'],
            'level': 'DEBUG',
            'propagate': True,
        },
        'scripts.save_report_2_mongo': {
            'handlers': ['sr2m_handler'],
            'level': 'DEBUG',
        },
    }
}
I believe this works because of this passage in Python docs: docs.python.org > Logging HOWTO > Advanced Logging Tutorial
Advanced Logging Tutorial
The logging library takes a modular approach and offers several categories of components: loggers, handlers, filters, and formatters. Loggers expose the interface that application code directly uses.
[...]
Logging is performed by calling methods on instances of the Logger class (hereafter called loggers). Each instance has a name, and they are conceptually arranged in a namespace hierarchy using dots (periods) as separators. For example, a logger named ‘scan’ is the parent of loggers ‘scan.text’, ‘scan.html’ and ‘scan.pdf’. Logger names can be anything you want, and indicate the area of an application in which a logged message originates.
A good convention to use when naming loggers is to use a module-level logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
This means that logger names track the package/module hierarchy, and it’s intuitively obvious where events are logged just from the logger name.
My emphasis: logger names track the package/module hierarchy. In this case I obtained the value of __name__ inside my script and then used it to name the logger in the loggers section of settings.py.
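One piece a standalone script still needs for any of this to take effect: the LOGGING dict in settings.py is only applied once Django's settings are loaded, so a cron-driven script has to trigger that itself. Here is a sketch for modern Django (1.7+, where django.setup() exists); the module path, settings module name, and logger name below are assumptions:
# scripts/save_report_2_mongo.py -- standalone script sketch
import logging
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

import django
django.setup()  # loads settings and applies the LOGGING dictConfig

# when the file is executed directly __name__ is '__main__', so name the
# logger explicitly to match the 'scripts.save_report_2_mongo' entry in LOGGING
logger = logging.getLogger('scripts.save_report_2_mongo')

if __name__ == "__main__":
    logger.info('report saved to mongo')
On older versions such as the OP's Django 1.4 there is no django.setup(); there, as far as I can tell, logging is configured the first time the settings object is accessed, so the script needs to touch django.conf.settings before it logs anything.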