I am trying to make Django create and rotate new logs every 10 minutes using TimedRotatingFileHandler. My settings are as follows:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'filename': 'logs/site.log',
            'backupCount': 10,
            'when': 'm',
            'interval': 10,
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
The first log file is created successfully. But when it is time to rotate the log file, I get the following error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'F:\\logs\\site.log' -> 'F:\\logs\\site.log.2021-05-22_19-18'
How do I configure the logging so that the current log is copied to a timed log and new data written to the main log file?
Unfortunately, this file handler is not suitable for Windows, because an open file cannot be moved or renamed there. While it may work fine under some circumstances when using ./manage.py runserver, it will fail with any production-ready WSGI or ASGI server, since those servers spawn multiple processes that are not aware that other ones have the log file open.
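If you still need multi-process-safe rotation on Windows, one option is the third-party concurrent-log-handler package, which uses a lock file so several processes can write safely. This is my suggestion rather than something the handler's docs promise for every setup, and it rotates by size rather than by time. A minimal sketch:

# Sketch assuming: pip install concurrent-log-handler
# Size-based rotation instead of every 10 minutes.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'concurrent_log_handler.ConcurrentRotatingFileHandler',
            'filename': 'logs/site.log',
            'maxBytes': 10 * 1024 * 1024,  # rotate at roughly 10 MB
            'backupCount': 10,
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}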
Answering my own question. The solution that seems to work for me is including the flag --noreload when running the server.
This allows the logs to be rotated correctly, although I don't know why.
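One plausible explanation (an assumption, not verified): runserver's autoreloader spawns a second process that also opens the log file, so the rotation-time rename fails on Windows; with --noreload only one process holds the file. The invocation is simply:

$ python manage.py runserver --noreload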
I have a Django elastic-apm setup that sends traces and logs to an ELK stack. It actually works, but not as I need it to. I get the trace, I get metadata, and the logs are even received (2nd pic).
But the problem is that I don't get any messages in the logs section, and I couldn't find out how to customize the fields.
But! When I search directly in the logs, I see the following: the message does exist.
Finally, when I search in the Discover section, I can see even more info, including the fields I actually need.
QUESTION
So, here are my questions. Is it possible to add at least the message info to the transaction logs (first pic)? Is it possible to add custom fields to the logs section (2nd pic)? Also, is there a way to make the logs clickable? (Also 2nd pic; I mean it's just plain text, so I have to go to Discover and copy-paste the info.)
Finally, why are the logs marked as errors if they are just logs and used as logs? I tried setting different levels like debug or info, as you can see in the 2nd screenshot, but they still come through as errors and go into the apm-7.14-error* index.
Here are my logging settings:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'elasticapm': {
            'level': 'DEBUG',
            'class': 'elasticapm.contrib.django.handlers.LoggingHandler',
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['elasticapm'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}
And that's how I send logs:
logger = logging.getLogger('meditations')
logger.info(
    'info',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.warning(
    'log',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
logger.debug(
    'debug',
    extra={
        'request.data': request.data,
        'user_utc_time': request.user.fcmTime
    }
)
I figured it out myself. The Elastic APM log handler doesn't pass debug/info/warning logs; it is only meant to pass high-level (error/critical) logs, and it doesn't hold the message. If you came here, you should use Django logging and send the logs to Elastic via logstash or filebeat. I used logstash.
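For reference, a minimal sketch of the logstash route with the python-logstash package (the host, port, and the choice of the TCP handler are my assumptions; adjust them to your logstash input):

# Sketch assuming: pip install python-logstash
# 'localhost:5959' must match the tcp input in your logstash pipeline.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logstash': {
            'level': 'DEBUG',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'localhost',
            'port': 5959,
            'version': 1,  # logstash event schema version
        },
    },
    'loggers': {
        'meditations': {
            'handlers': ['logstash'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}

The extra={...} fields from the logging calls above should then be shipped as fields on the logstash event, which is what makes them searchable in Discover.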
I launch my Django 1.8 app with UWSGI in 10 processes. UWSGI is set up under the virtualenv.
Django file logging config is the following:
LOG_FILE_PATH = '/tmp/app_logs/debug.log'
...
'handlers': {
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'formatter': 'simple'
    },
    'file': {
        'level': 'INFO',
        'class': 'logging.handlers.TimedRotatingFileHandler',
        'filename': LOG_FILE_PATH,
        'formatter': 'verbose',
        'when': 'midnight',
        'interval': 1,
        'backupCount': 0,
...
When I start UWSGI, logging works just fine - I see debug.log being updated with entries, and I also see activity in the UWSGI log file:
/var/log/uwsgi/mysite.log
After midnight, I see that the Django log file rotation happened (debug.log.2015-09-30 is indeed created), but it is almost empty:
$ cat debug.log.2015-09-30
INFO 2015-10-01 17:45:21,362 MainScreen 1836 140697212401600 MainScreen is called with the following parameters: {}
ERROR 2015-10-01 17:45:21,362 MainScreen 1836 140697212401600 Login error: NotEnoughParametersError {}
Also, the current log file debug.log is no longer being updated with app activity. Neither is the UWSGI log file:
$ tail -f /var/log/uwsgi/mysite.log
remains silent while the app is up and running. If I restart UWSGI, everything goes back to normal until the next midnight.
I suspect this might be a concurrency issue with Django logging. How do I overcome that? And how do I fix UWSGI logs too?
For the Django log, I resolved the issue by substituting TimedRotatingFileHandler with ConcurrentLogHandler.
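The change is only the handler class; a sketch of the swapped handler entry, assuming the ConcurrentLogHandler package (pip install ConcurrentLogHandler), which rotates by size rather than at midnight:

# Sketch: cloghandler uses a lock file so multiple UWSGI workers
# can write and rotate safely. Rotation is size-based, not timed.
'file': {
    'level': 'INFO',
    'class': 'cloghandler.ConcurrentRotatingFileHandler',
    'filename': LOG_FILE_PATH,
    'formatter': 'verbose',
    'maxBytes': 1024 * 1024 * 10,  # ~10 MB per file instead of 'when': 'midnight'
    'backupCount': 5,
},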
I am trying to set up my logging configuration in settings.py, and there are so many options that I'm having trouble replicating the built-in development server log (which prints to the console).
I want my production log to record the same information that would normally be printed to console in the development server log (GET requests, debug info, etc). I either need to know which settings I need to change below, or the location of the settings for the built-in development server log, so that I can copy that.
LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/home/django/django_log.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

if DEBUG:
    # make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']
I also do not want to have to add any code anywhere else but my settings.py, if at all possible. I don't want to have to go into my views.py and specify what errors to print or log; I never had to do that with the development server, so I'm hoping I can figure this out.
In Django 1.8, the default logging configuration for a debug environment is:
When DEBUG is True:
The django catch-all logger sends all messages at the WARNING level or higher to the console. Django doesn’t make any such logging calls at this time (all logging is at the DEBUG level or handled by the django.request and django.security loggers).
The py.warnings logger, which handles messages from warnings.warn(), sends messages to the console.
This logging configuration can be found at django.utils.log.DEFAULT_LOGGING. Note that the catch-all logger actually gets info messages as well, not just warning and above.
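If you want to use those defaults as a starting point, you can print them and copy what you need (a quick sketch):

# Print Django's default logging dict so it can be copied into settings.py.
from django.utils.log import DEFAULT_LOGGING
import pprint
pprint.pprint(DEFAULT_LOGGING)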
When overriding the default logging settings, note that disable_existing_loggers, if set to True, will silence all of Django's default loggers.
The development server logs every incoming request directly to stderr like this:
[18/Oct/2015 12:08:17] "GET /about/ HTTP/1.1" 200 9946
This is specific to the development server and will not be carried over to a production environment, unless you replicate it with middleware.
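If you do want that request line in production, a rough sketch of such a middleware follows (the class name and format are mine and only approximate the runserver output; this is Django 1.8-era old-style middleware, hooked up via MIDDLEWARE_CLASSES):

import logging

logger = logging.getLogger('django.request')  # or any logger wired to your file handler

class RequestLogMiddleware(object):
    # Logs every request roughly the way runserver does; the timestamp
    # is best added via an %(asctime)s formatter on the handler.
    def process_response(self, request, response):
        logger.info('"%s %s" %s', request.method,
                    request.get_full_path(), response.status_code)
        return response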
My Django Logger works fine on my local machine but does not work when I deploy it to the server.
Local: OS X 10.9.5, Python 2.7.5, Django 1.6.2
Server: Ubuntu 12.04, Apache 2.2.22, mod_wsgi Version: 3.3-4ubuntu0.1, Python 2.7.3, Django 1.6
On my local setup I run python manage.py syncdb; this creates the survey.log file, which I then follow with tail -f survey.log so I can see the error messages as they are created.
On my server I run python manage.py syncdb; this creates the survey.log file, which I then follow with tail -f survey.log. However, I cannot see any of my debug messages, and when I inspect the file with nano it is empty.
Why is no logging data being recorded into survey.log on my production environment? What am I missing?
views.py
import logging
logger = logging.getLogger(__name__)
logger.debug('This is your images list in 7: %s', images)
settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'applogfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            # 'filename': os.path.join(DJANGO_ROOT, 'survey.log'),
            'filename': 'survey.log',
            'maxBytes': 1024 * 1024 * 15,  # 15MB
            'backupCount': 10,
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'survey': {
            'handlers': ['applogfile'],
            'level': 'DEBUG',
        },
    },
}
EDIT
I have now discovered that my Django error logs are getting written to my Apache error logs. Is this in any way normal?
Running
sudo tail /var/log/apache2/error.log
provides me with the expected printout that I should be getting in my Django error log file above, e.g.:
[15/Dec/2014 21:36:07] DEBUG [survey:190] This is your images list in 7: ['P3D3.jpg', 'P1D1.jpg', 'P5D5.jpg']
You aren't using the correct logger in your views.py. Try this:
import logging
logger = logging.getLogger('survey')
logger.debug('This is a debug message.')
The logger you get must match the loggers defined in LOGGING.
My guess is that the logfile actually gets created, but in a directory where you don't expect it to be. Try putting a full path in your configuration, something like 'filename': '/tmp/survey.log', then restart Apache and check if it's there. Of course, you must first apply the solution that Derek posted.
The reason you see the log messages in your Apache logs is probably that mod_wsgi somehow configures a default (root) logger and your configuration doesn't disable it (you have 'disable_existing_loggers': False).
If you don't pass the full path in the filename, the file is created in the directory where the running process was started (or where it chdir'd to); which directory is used depends on the operating system.
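Building an absolute path in settings.py avoids the issue entirely; a small sketch (BASE_DIR is the variable Django's startproject template defines, so adjust to your layout):

# settings.py - resolve the log path against the project directory,
# not against whatever the process's working directory happens to be.
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
LOG_FILE = os.path.join(BASE_DIR, 'survey.log')  # use as 'filename' in LOGGING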
We have a website with several thousand requests per minute and I'd like to limit the maximum number of error admin mails that Django sends us, in case we have an error in our code.
On my local dev system, the following logging configuration works nicely. It sets a cache value, which expires after 10 seconds. An error that occurs within this time interval will not be reported at all. That's what we want. All of the following code is placed inside our project settings file.
def limit_error_interval(record):
    from django.core.cache import cache
    if not cache.get('last_error'):
        cache.set('last_error', '1', 10)  # 10 sec
        return True
    return False
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'error_limit': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': limit_error_interval
        },
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['error_limit'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True
        },
    },
}
However, the code fails on our production server (NGINX). limit_error_interval gets called, but every error is still sent by mail. Our cache works elsewhere in the code, but it appears not to work inside this function. Is there a problem with using the cache inside the settings?
This alternative approach/snippet has exactly the same problem: works on local dev system, but not in production.
class ErrorIntervalFilter(object):
    def filter(self, record):
        from django.core.cache import cache
        if not cache.get('last_error'):
            cache.set('last_error', '1', 10)
            return True
        return False

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'filters': ['error_interval'],
        },
    },
    'filters': {
        'error_interval': {
            '()': ErrorIntervalFilter,
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}
Any help or hint appreciated! :-) Merry Christmas!
Perhaps you are running several instances of the same application in the production environment; the limit will then only apply per instance.
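If that is the case, the fix is to point the filter at a cache that all processes share. The default locmem backend lives inside each process, so the 'last_error' key set by one worker is invisible to the others. A sketch assuming memcached (the location is a placeholder):

# settings.py - a cross-process cache so every worker sees 'last_error'.
# The memcached address is a placeholder; point it at your instance.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}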
In my experience, a log aggregation tool like Sentry is so much better than receiving errors by email that it is not worth chasing the cause of your problem. Just install Sentry and be happy.
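For a current Django project that is only a few lines with sentry-sdk (the DSN below is a placeholder; use the one from your Sentry project's settings):

# settings.py - minimal Sentry setup, assuming: pip install sentry-sdk
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn='https://examplePublicKey@o0.ingest.sentry.io/0',  # placeholder DSN
    integrations=[DjangoIntegration()],
)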