I deployed a Django app on IIS, but the logging code that was working perfectly on localhost now causes a server 500 error.
Can I get any help, please?
LOGGING = {
    'version': 1,
    'loggers': {
        'django': {
            'handlers': ['debuglog'],
            'level': 'DEBUG'
        },
        'django.server': {
            'handlers': ['errorlog'],
            'level': 'ERROR'
        }
    },
    'handlers': {
        'debuglog': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': './logs/debug.log',
            'formatter': 'simple',
        },
        'errorlog': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': './logs/error.log',
            'formatter': 'simple',
        }
    },
    'formatters': {
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        }
    }
}
Maybe IIS does not allow Django to create the log files and needs permission to do so? If that is the case, how would I grant it?
You can follow the steps below to deploy the Django site in IIS:
1) Add the site with your project root path in IIS.
2) Select the site and double-click "Handler Mappings". Click "Add Module Mapping…" in the Actions pane on the right.
3) Set the request path to * and the module to FastCgiModule.
Set the executable to:
C:\Python39\Scripts\python.exe|C:\Python39\Lib\site-packages\wfastcgi.py
4) Now go back to the server node and open "FastCGI Settings"; select the entry you just created.
Add the environment variables below:
DJANGO_SETTINGS_MODULE = "your settings module, e.g. djangosite.settings"
WSGI_HANDLER = django.core.wsgi.get_wsgi_application()
PYTHONPATH = "path to your repo root, e.g. C:\inetpub\wwwroot\djangosite"
Do not forget to grant the IIS_IUSRS and IUSR accounts permission on the site root folder and the Python folder.
Check the permissions on the log folder too.
https://learn.microsoft.com/en-us/visualstudio/python/configure-web-apps-for-iis-windows?view=vs-2019
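One likely contributor to the 500 error in the question is the relative './logs/...' path: under IIS the worker process does not normally run with the project folder as its working directory, so a relative FileHandler path can point at a location the process cannot create or write to. Below is a minimal sketch of switching to absolute paths; it assumes the usual BASE_DIR variable from a standard Django settings.py and keeps the 'logs' folder name from the question.

import os

LOG_DIR = os.path.join(BASE_DIR, 'logs')
os.makedirs(LOG_DIR, exist_ok=True)  # create the folder up front so FileHandler can open the files

LOGGING = {
    # ... same configuration as in the question, but with absolute filenames, e.g.:
    'handlers': {
        'debuglog': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': os.path.join(LOG_DIR, 'debug.log'),
            'formatter': 'simple',
        },
        # 'errorlog' gets os.path.join(LOG_DIR, 'error.log') in the same way
    },
}

A folder created this way still needs the IIS_IUSRS/IUSR permissions mentioned above.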
Below is the logging configuration I have for a DRF application in Azure App Service. I tried using TimedRotatingFileHandler, but I was not able to save the logs with that option. Also, whenever the App Service restarts, the previous logs are erased. Is there a way to maintain day-wise logs in Azure App Service?
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
    },
    'filters': {
        'error_mails': {
            '()': 'django.utils.log.CallbackFilter',
            'callback': 'app.log.CustomExceptionReporter'
        },
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
    },
    'handlers': {
        'logfile': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': f'{BASE_DIR}/app_log.log',
            'maxBytes': 9000000,
            'backupCount': 10,
            'formatter': 'standard'
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
            'reporter_class': 'app.log.CustomExceptionReporter',
            # 'filters': ['require_debug_false',]
        },
    },
    'loggers': {
        'django': {
            'handlers': ['logfile', 'mail_admins'],
            'propagate': True,
            'level': 'ERROR',
        },
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'app': {
            'handlers': ['logfile', 'console'],
            'level': 'INFO',
            'propagate': True,
        }
    }
}
Is there a way to maintain day-wise logs in Azure App Service?
Currently, it is not possible to save logs day-wise. By default, App Service saves the log in LogFiles\Application\, and the log file name cannot be changed either.
Application logging
File System - saves the logs to the file system. This setting only stays enabled for 12 hours.
Azure Blob Storage - saves the logs to Azure Blob Storage, which you can use even if you also save the logs to the file system (this option is not available for the Linux App Service).
You can also define a level of verbosity for application logging, which filters the logged data into the Error, Warning, Information, or Verbose categories; everything you log is caught by the Verbose level. Refer to Azure App Service Log files.
whenever the app service restarts, the previous logs are getting erased
You have to take a backup of your app so that you can recover all the information (configuration, log files, app details) about the App Service. Refer to App Service Backup.
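If application-level, day-wise rotation is still wanted, a handler entry along the lines of the sketch below could be tried. Note that the file path is an assumption: the logs need to go to a directory that is both writable and persistent for your App Service plan (the /home share is the usual candidate in the default configuration), otherwise they will still disappear on restarts.

'handlers': {
    'daily_file': {
        'level': 'INFO',
        'class': 'logging.handlers.TimedRotatingFileHandler',
        # assumed path: /home is the persisted share on App Service in the default setup;
        # verify this for your plan before relying on it
        'filename': '/home/LogFiles/app_log.log',
        'when': 'midnight',  # rotate once per day
        'backupCount': 30,   # keep roughly a month of daily files
        'formatter': 'standard',
    },
},

Rotation is then handled inside the application process, independently of the App Service logging settings described above.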
I have encountered some strange behavior of Django loggers.
I am developing a front-end application using Django. During the login service, I make some requests to certain components and use log.warning() calls to follow the flow of the requests.
The logs worked perfectly until I decided to add a LOGGING configuration to write the log output to a file, since I want to deploy the application via Docker and periodically check the log files.
When I added the following Django configuration concerning logging:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'detailed': {
            'class': 'logging.Formatter',
            'format': "[%(asctime)s] - [%(name)s:%(lineno)s] - [%(levelname)s] %(message)s",
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'detailed',
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': "{}/am.log".format(BASE_DIR),
            'mode': 'w',
            'formatter': 'detailed',
            'level': 'INFO',
            'maxBytes': 2024 * 2024,
            'backupCount': 5,
        },
    },
    'loggers': {
        'am': {
            'level': 'INFO',
            'handlers': ['console', 'file']
        },
    }
}
The logging stops working. The file specified in the logging configuration, am.log, is indeed created, but nothing gets printed to it. Even the console logging no longer takes place.
I took this logging configuration from one of my Django projects, the backend for this application, where it works perfectly. I really don't understand what I am doing wrong. Could you please help me or point me in the right direction? I would be very grateful.
I wish you all a good day!
By using the key "am" in your 'loggers' configuration, you're defining one logger with name "am":
'loggers': {
    'am': {  # <-- name of the logger
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
}
So to use that logger, you have to get it by that name:
logger = logging.getLogger("am")
logger.warning("This is a warning")
If you name your loggers by the name of the module in which you're running, which is recommended practice, then you need to define each module logger:
logger = logging.getLogger(__name__) # <-- this logger will be named after the module, e.g. your app name.
Then in your logging configuration you can specify logging behavior per module (per app):
'loggers': {
    'my_app': {  # <-- logging for my app
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
    'django': {  # <-- logging for the Django module
        'level': 'WARNING',
        'handlers': ['console', 'file']
    },
}
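With a configuration like that, a log call made inside the app is picked up through the logger hierarchy (the module name and message below are just illustrative):

# e.g. in my_app/views.py
import logging

logger = logging.getLogger(__name__)  # __name__ == "my_app.views", so the logger is named "my_app.views"

logger.info("something happened")  # propagates up to the "my_app" logger configured above

Any logger named "my_app.<something>" bubbles up to the "my_app" entry unless propagation is explicitly turned off.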
Or if you just want to log everything the same way, use the root ('') logger, which doesn't have a name, just the empty string:
'loggers': {
    '': {  # <-- root logger
        'level': 'INFO',
        'handlers': ['console', 'file']
    },
}
I'm trying to configure Django logging in the Django settings file so that it logs Django info and my application's info to a custom file for easy viewing. Here's my logging config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
        'file': {
            # exact format is not important, this is the minimum information
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
        },
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'file',
            'filename': 'logs/django_log.log',
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5242880,  # 5*1024*1024 bytes (5MB)
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'console'],
            'level': 'INFO',
            'propagate': True,
        },
        'py.warnings': {
            'handlers': ['console'],
        },
        'my_application': {
            'level': 'INFO',
            'handlers': ['file', 'console'],
            # required to avoid double logging with root logger
            'propagate': False,
        },
    },
}
This works on my local manage.py test server: both Django events and the events I log myself (using my_application as the logger name) appear. However, on my web server, the log file is created but, oddly, only populated with occasional Django WARNING messages. So there is no permissions error or inability to access the log file. Since the same config works locally, the config can't be the issue, and it clearly enables INFO-level logging.
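For reference, the application events are logged roughly like this (simplified; the actual call sites and messages are omitted):

import logging

logger = logging.getLogger("my_application")

logger.info("some application event")  # shows up locally, but not on the server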
My server setup is taken from this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04 and uses Gunicorn with Nginx as a reverse proxy. Could the issue be with the configuration of these? I'm stumped.
Additionally, what is a good best-practice place to store this Django log file?
Also, one related bonus question: what is a good best-practice free/cheap service that can notify you when a specific error is logged? It seems like a good idea to set something like that up, but I don't think the Django emailer is necessarily the most elegant or best option.
In my project I deployed my Django project with a Tornado server, and my Tornado main function is:
import os

import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.wsgi
from django.core.wsgi import get_wsgi_application


def main():
    tornado.options.options.logging = None
    tornado.options.parse_command_line()
    os.environ['DJANGO_SETTINGS_MODULE'] = 'Zero.settings'
    application = get_wsgi_application()
    container = tornado.wsgi.WSGIContainer(application)  # hand the Django WSGI app to Tornado
    http_server = tornado.httpserver.HTTPServer(container, xheaders=True)
    http_server.listen(tornado.options.options.port)
    tornado.ioloop.IOLoop.current().start()
I use tornado.options.options.logging = None to disable Tornado's logging output, but it still outputs the log messages to my console twice. My Django logging config is:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(threadName)s] [%(name)s:%(funcName)s] [%(levelname)s]- %(message)s'
        }
    },
    'filters': {
    },
    'handlers': {
        'error': {
            'level': 'ERROR',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(BASE_DIR, 'log', 'error.log'),
            'maxBytes': 1024*1024*5,
            'backupCount': 5,
            'formatter': 'standard',
        },
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': 'INFO',
            'propagate': True
        },
    }
}
The final result is:
2018-06-15 17:40:55,724 [MainThread] [base_views:get] [INFO]- get message correct
INFO:base_views:get message correct
So what can I do to solve this problem?
Thank you.
You've only configured the django logger, not the root logger. When Tornado sees that the root logger is not configured, it adds its own last-resort handler (using logging.basicConfig instead of tornado.log). Because of the way Python loggers propagate, this results in the output of every other logger being duplicated.
When you use tornado.options.options.logging = None, you should make sure that you configure the root logger yourself, or configure all of your other loggers with propagate=False. In this case, move the loggers.django section of your config to a new root section:
'handlers': {...},
'root': {
    'handlers': ['console'],
    'level': 'INFO',
},
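Applied to the configuration in the question, that would look roughly like this (the 'error' file handler is left out for brevity):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(threadName)s] [%(name)s:%(funcName)s] [%(levelname)s]- %(message)s'
        }
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'standard'
        },
    },
    'root': {
        'handlers': ['console'],
        'level': 'INFO',
    },
}

With the root logger configured, Tornado no longer installs its basicConfig fallback handler, so each record is emitted only once.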
I'm very confused. I've searched through many similar threads and tried all of the proposed solutions, and nothing works.
I'm trying to set up logging in my Django application. I can get any direct logger calls (logger.error(), etc.) to write to my file from the application or the console, but django.request simply won't write. On startup both files are created and have the correct permissions. I've tried django-requestlogging and multiple other things, changing disable_existing_loggers, propagate, etc., and nothing works. This is my logging setup right now:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': "[%(asctime)s] %(levelname)s [%(name)s:%(lineno)s] %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/srv/hawthorn/logs/debug.log',
            'formatter': 'verbose'
        },
        'request_file': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/srv/hawthorn/logs/request.log',
            'formatter': 'verbose'
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['request_file'],
            'level': 'INFO',
        },
        'audio': {
            'handlers': ['file'],
            'level': 'INFO',
        },
    }
}
If I even try to change the logfile path to a relative one, like 'logs/debug.log', the file gets created but stops getting written to (it has the correct permissions). My project looks like:
hawthorn/
audio/ (views are here)
admin/
hawthorn/ (settings.py is here)
Interestingly, almost any configuration works when trying it from the Python console. I could get django-requestlogging to work from the console (at least calling logger.info()), but not from hitting my app in a browser (which also calls logger.info()).
I'm running Django 1.6.1 and Python 2.7.6.
What am I missing? All I want is a log of the requests as they come in. I'd really appreciate the help.
Calling it like:
import logging
logger = logging.getLogger(__name__)
then just logger.debug("test") in the view.
EDIT:
It turns out the requests are being logged to the request.log file, but only warnings and errors. I changed all the levels in the config to DEBUG, so why won't it log all requests?