I am struggling with Django logging configuration. I have one app called "api" and I want to write all logs from this app to a file. When I name the logger django everything works fine, but when I change it to my app name it doesn't.
Here is my configuration:
File structure:
email_api
    api
        tasks.py
    email_api
        celery.py
        settings
    logs
        email.log
My logging configuration:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': 'logs/email.log',
        },
    },
    'loggers': {
        'api': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}
tasks.py, the file where I do the logging:
import logging

logger = logging.getLogger(__name__)

@app.task(bind=True, default_retry_delay=3, max_retries=3)
def send_email(self, data, email_id):
    message = create_message(data, email_id)
    try:
        logger.debug("Log Message Here")
        message.send()
    except Exception as exc:
        self.retry(exc=exc)
Keys in the LOGGING['loggers'] dict are logger names, and you have configured a logger named api.
In order to write to this logger, you should request it by that name:
logger = logging.getLogger('api')
...
logger.debug("Log Message Here")
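As a quick sanity check (a sketch; the logs/email.log path comes from the question's handler config), you can confirm the handler is wired up from a Django shell:

# run inside: python manage.py shell
import logging

logger = logging.getLogger('api')
logger.debug("test message")  # should show up in logs/email.log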
Related
I am trying to access the logs from my Django app in GCP Logging, but thus far I have been unsuccessful.
Here is my logging config:
client = gcp_logging.Client.from_service_account_json(
    json_credentials_path='logging_service_account.json')
client.setup_logging()
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': '%(levelname)s : %(message)s - [in %(pathname)s:%(lineno)d]'},
        'short': {'format': '%(message)s'}
    },
    'handlers': {
        'stackdriver': {
            'formatter': 'standard',
            'class': 'google.cloud.logging.handlers.CloudLoggingHandler',
            'client': client
        },
        'requestlogs_to_stdout': {
            'class': 'logging.StreamHandler',
            'filters': ['request_id_context'],
        }
    },
    'filters': {
        'request_id_context': {
            '()': 'requestlogs.logging.RequestIdContext'
        }
    },
    'loggers': {
        'StackDriverHandler': {
            'handlers': ['stackdriver'],
            'level': "DEBUG"
        },
        'django.request': {
            'handlers': ['stackdriver']
        },
        'requestlogs': {
            'handlers': ['requestlogs_to_stdout'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}
I invoke the logs along the lines of:
import logging

logger = logging.getLogger('StackDriverHandler')

class OrganisationDetail(generics.RetrieveUpdateDestroyAPIView):
    ...

    def patch(self, request, pk, format=None):
        try:
            ...
            if serializer.is_valid():
                serializer.save()
                logger.info(f"PATCH SUCCESSFUL: {serializer.data}")
                return Response(serializer.data)
            logger.warning(f"PATCH Failed: {serializer.errors}")
            return JsonResponse(serializer.errors, status=400)
        except Exception as e:
            logger.error(f"PATCH Failed with exception: {e}")
            return JsonResponse({'error': str(e)}, status=500)
In GCP, I set up a service account, enabled the Logging API, and gave the service account permissions to write logs and monitoring metrics.
I then made a secret containing my service account key, and in my cloud-build.yaml file I run a step like this:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: [ '-c', "gcloud secrets versions access latest --secret=<secret_name> --format='get(payload.data)' | tr '_-' '/+' | base64 -d > logging_service_account.json" ]
The above step should:
- Fetch the secret
- Write it to a JSON file in the app instance container, where it can be read by my settings.py via gcp_logging.Client.from_service_account_json(json_credentials_path='logging_service_account.json')
Perhaps there is a more straightforward way to achieve this, but it feels like it should work. Any help would be much appreciated. Thanks.
After all the steps above, when I visit the Logging service in the GCP console, I only see one entry, under my logging service account, saying it was created, and none of the logs from my actual Django app.
I just set up a new server, and while checking it I got a 500 error. I expected debug.log to contain the error message, but when I check it the file is empty; nothing was written. I have changed the logger settings many times, but the file is still empty and I can't fix the error because I can't see what is wrong.
This is my views.py; I put this line in to do the logging:
logger = logging.getLogger(__name__)
settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'formatters': {
        'simple': {
            'format': '%(asctime)s %(filename)s:%(lineno)d %(message)s',
        }
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'filters': ['require_debug_false'],
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/service/debug.log',
            'formatter': 'simple',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}

if DEBUG:
    del LOGGING['loggers']['django']
    del LOGGING['handlers']['file']
    if not os.path.exists('log'):
        os.makedirs('log')
This is my wsgi.py:
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
application = get_wsgi_application()
This is the part of settings.py where the local settings are imported:
try:
    from local_settings import *
except ImportError:
    raise ImportError('You must create local_settings.py on project root')
This is local_settings.py
DEBUG=False
You don't have a logger named views in your settings; there is only a logger named django. So try using that one in views.py:
logger = logging.getLogger('django')
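A minimal sketch of how that looks in a view (the view name here is just an example):

import logging

from django.http import HttpResponse

logger = logging.getLogger('django')  # matches the logger name in LOGGING['loggers']

def my_view(request):
    # ERROR passes the logger's 'ERROR' level, so it reaches the file handler
    logger.error("something went wrong")
    return HttpResponse("ok")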
What is the value of DEBUG on your server? Because of this block:

if DEBUG:
    del LOGGING['loggers']['django']
    del LOGGING['handlers']['file']

if DEBUG is True, you are deleting the logger and the handler...
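If you want the file to be written regardless of the DEBUG setting, one option (a sketch, not the original poster's configuration) is to drop the require_debug_false filter and the deletion block, keeping a single always-on file handler:

# Sketch: file logging active in both DEBUG and non-DEBUG runs; the
# rotation sizes are assumptions, not values from the question.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': '/var/log/service/debug.log',
            'maxBytes': 5 * 1024 * 1024,  # 5 MB per file (assumption)
            'backupCount': 3,             # keep 3 rotated files (assumption)
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}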
I've configured my logging like so:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
        'cute': {
            'class': 'logging.handlers.SocketHandler',
            'host': '127.0.0.1',
            'port': 19996
        },
    },
    'loggers': {
        'django': {
            'handlers': ['cute'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'),
        },
    },
}
But when I try to log out, I get an error in the console:
TypeError: an integer is required (got type socket)
This seems to happen while the log record is being pickled, I think.
What is going on, and how can I get the SocketHandler to work?
There is a bug report about this: the request object cannot be pickled, so the log call fails.
My loggers looked like yours before I hit the same error. Since my code contains apps that don't work with the request, I partially fixed my problem by creating a logger for django.request that does not use the socket handler:
'django.request': {
    'handlers': ['defaultfile', 'console'],
    'level': 'WARNING',
    'propagate': False,
},
However, the bug report also suggests creating a custom SocketHandler that strips the request:
from logging.handlers import SocketHandler as _SocketHandler

class DjangoSocketHandler(_SocketHandler):
    def emit(self, record):
        if hasattr(record, 'request'):
            record.request = None
        return super().emit(record)
I haven't tried this yet, but it could be a way to go.
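If you do try it, the handler entry in LOGGING would point at the custom class instead of logging.handlers.SocketHandler; roughly like this, where the dotted path yourapp.logging_handlers is an assumption about where you put the class:

'handlers': {
    'cute': {
        # hypothetical module path; adjust to wherever DjangoSocketHandler lives
        'class': 'yourapp.logging_handlers.DjangoSocketHandler',
        'host': '127.0.0.1',
        'port': 19996,
    },
},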
I'm using Django with Celery to execute periodic tasks, and Raven as a Sentry client.
So far I've managed to run several apps with celery beat and all worked fine.
For some reason, in a recent app I'm working on, the periodic tasks stop running when I set the root logger to use the 'sentry' handler.
When I set the root logger to use only the 'console' handler, everything works.
I can't wrap my head around what causing this issue.
This is my logging dict:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'root': {
        'level': os.environ.get('LOG_LEVEL', 'INFO'),
        'handlers': ['console'],
    },
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'sentry': {
            'level': 'WARNING',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'raven': {
            'level': 'WARNING',
            'handlers': ['console', 'sentry'],
            'propagate': True,
        },
        'celery': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    }
}
And the env var controlling the root logger handlers:
ENABLE_SENTRY = os.environ.get('ENABLE_SENTRY', 'FALSE') == 'TRUE'

if ENABLE_SENTRY:
    LOGGING['root']['handlers'] = ['console', 'sentry']
Note: it seems the root logger no longer logs to the console after that change.
This is how I run the celery beat and worker:
python manage.py celery worker -E -B --maxtasksperchild=1000 \
    --concurrency=10 --loglevel=DEBUG -Ofair
These are some of the packages I'm using:
celery==3.1.17
django-celery==3.1.16
raven==5.0.0
Django==1.8.7
This is my celery.py file:
"""
This module will hold celery configuration
"""
from __future__ import absolute_import
from django.conf import settings
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ltg_backend_app.settings')
# init the celery app
app = Celery('ltg_backend_app')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
Any help would be greatly appreciated!
Did you configure Raven to trap the signals needed in order to work with Celery? Here's how I configure my Celery app:
import celery
from django.conf import settings
import raven
from raven.contrib.celery import register_signal, register_logger_signal

class Celery(celery.Celery):
    def on_configure(self):
        # https://docs.sentry.io/clients/python/integrations/celery/
        if 'dsn' in settings.RAVEN_CONFIG and settings.RAVEN_CONFIG['dsn']:
            client = raven.Client(settings.RAVEN_CONFIG['dsn'])
        else:
            client = raven.Client()  # should do nothing
        # register a custom filter to filter out duplicate logs
        register_logger_signal(client)
        # hook into the Celery error handler
        register_signal(client)

app = Celery('foobar')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
see here for more config options and details: https://docs.sentry.io/clients/python/integrations/celery/
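The snippet above assumes a RAVEN_CONFIG dict in your Django settings; a minimal sketch (the SENTRY_DSN environment variable name is my assumption):

import os

# Minimal Raven configuration for the Django integration; an empty DSN
# means raven.Client() is effectively a no-op.
RAVEN_CONFIG = {
    'dsn': os.environ.get('SENTRY_DSN', ''),
}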
I'm using Django 1.4.6 and I want to use the logging module integrated with Django to output the response content; however, I cannot see it in the log file.
A sample of the source is shown here:
import logging
logger = logging.getLogger('__file__')
...
response = redirect(url)
logger.debug(response.content)
return response
Once you have configured your loggers, handlers, filters, and formatters, you need to call the logger as follows:
import logging

# Standard instance of a logger, named after the current module
stdlogger = logging.getLogger(__name__)

response = redirect(url)
stdlogger.debug(response.content)
return response
The call to logging.getLogger() obtains (creating, if necessary) an instance of a logger. The logger instance is identified by a name. This name is used to identify the logger for configuration purposes.
By convention, the logger name is usually __name__.
Using the Python __name__ variable with getLogger means the logger name automatically follows the module's dotted package path.
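For instance (a sketch; myapp is a placeholder app name):

# myapp/views.py
import logging

# __name__ here is 'myapp.views', so this logger is a child of any 'myapp'
# logger declared in LOGGING['loggers'] and propagates records up to it.
logger = logging.getLogger(__name__)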
Please show the configuration of your logging in the settings file. I have a small change to your code:
logger = logging.getLogger(__name__)
And here is a logging configuration for settings.py (note that it references sys.stdout, so settings.py needs import sys):
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': "[%(asctime)s] %(levelname)s %(message)s",
            'datefmt': "%d/%b/%Y %H:%M:%S"
        }
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django_practices.log',
            'formatter': 'verbose'
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'stream': sys.stdout,
            'formatter': 'verbose'
        },
    },
    'loggers': {
        'name_your_app_django': {
            'handlers': ['file', 'console'],
            'level': 'DEBUG',
        }
    }
}
With this configuration, log messages are written both to the console and to the file.
Note: change name_your_app_django to match your app, so that the logger you create with __name__ in your code propagates to it.
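Under that setup, a view along these lines (a sketch; myapp is a placeholder, and the logger entry in LOGGING would then be named 'myapp') writes the response content to both the console and the file:

# myapp/views.py
import logging

from django.shortcuts import redirect

logger = logging.getLogger(__name__)  # 'myapp.views', a child of the 'myapp' logger

def my_view(request):
    url = '/somewhere/'             # placeholder target
    response = redirect(url)
    logger.debug(response.content)  # sent to the console and the log file
    return response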