If I define a logger with 2 handlers in a module A
# module A
import logging
logger = logging.getLogger('PARSER')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('glm_parser.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
Then module A is imported into other modules multiple times.
However, each time A is imported, this code is executed, so in the end I have 2 * x handlers, where x is the number of times A is imported.
How do I avoid this problem?
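One common fix (a sketch, not the only approach): since logging.getLogger('PARSER') always returns the same cached logger object, you can guard the handler setup so it only runs when the logger has no handlers yet:

```python
import logging

logger = logging.getLogger('PARSER')
logger.setLevel(logging.DEBUG)

# Only attach handlers the first time this setup code runs;
# on later runs the logger already has them.
if not logger.handlers:
    fh = logging.FileHandler('glm_parser.log')
    fh.setLevel(logging.DEBUG)

    ch = logging.StreamHandler()
    ch.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)

    logger.addHandler(fh)
    logger.addHandler(ch)
```

(Note that a plain `import` only executes module code once per process; the duplicates typically appear when the setup runs again, e.g. via importlib.reload or re-running a notebook cell.)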
I'm running Django 3.1 on Docker and I want to log to a different file every day. I have a couple of crons running and also Celery tasks. I don't want to log to a single file, because many processes would be writing to it and debugging/reading it would be difficult.
If I have cron tasks my_cron_1, my_cron_2, my_cron_3, I want to be able to log to a file and append the date:
MyCron1_2020-12-14.log
MyCron2_2020-12-14.log
MyCron3_2020-12-14.log
MyCron1_2020-12-15.log
MyCron2_2020-12-15.log
MyCron3_2020-12-15.log
MyCron1_2020-12-16.log
MyCron2_2020-12-16.log
MyCron3_2020-12-16.log
Basically, I want to be able to pass in a name to a function that will write to a log file.
Right now I have a class MyLogger:
import logging

class MyLogger:
    def __init__(self, filename):
        # Gets or creates a logger
        self._filename = filename

    def log(self, message):
        message = str(message)
        print(message)
        logger = logging.getLogger(__name__)
        # set log level
        logger.setLevel(logging.DEBUG)
        # define file handler and set formatter
        file_handler = logging.FileHandler('logs/' + self._filename + '.log')
        formatter = logging.Formatter('%(asctime)s : %(message)s')
        file_handler.setFormatter(formatter)
        # add file handler to logger
        logger.addHandler(file_handler)
        # Logs
        logger.info(message)
I call the class like this
logger = MyLogger("FirstLogFile_2020-12-14")
logger.log("ONE")
logger1 = MyLogger("SecondLogFile_2020-12-14")
logger1.log("TWO")
FirstLogFile_2020-12-14 will contain both ONE and TWO, but it should only contain ONE.
SecondLogFile_2020-12-14 will contain TWO.
Why is this? Why are the logs being written to the wrong file? What's wrong with my code?
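A likely explanation (and a sketch of a fix, based on the class above): logging.getLogger(__name__) returns the same cached logger object for every MyLogger instance, so each call to log() attaches yet another FileHandler to that one shared logger, and every message is then emitted by all handlers attached so far. Keying the logger by filename and attaching the handler only once avoids both problems:

```python
import logging
import os

class MyLogger:
    def __init__(self, filename):
        self._filename = filename
        # getLogger caches loggers by name, so key the logger by the
        # filename instead of __name__ to get one logger per file.
        self._logger = logging.getLogger(filename)
        self._logger.setLevel(logging.DEBUG)
        # Attach the file handler only once, even if the same filename
        # is used to construct MyLogger again.
        if not self._logger.handlers:
            os.makedirs('logs', exist_ok=True)
            handler = logging.FileHandler('logs/' + filename + '.log')
            handler.setFormatter(logging.Formatter('%(asctime)s : %(message)s'))
            self._logger.addHandler(handler)

    def log(self, message):
        self._logger.info(str(message))
```

With this version, MyLogger("FirstLogFile_2020-12-14").log("ONE") writes only to FirstLogFile_2020-12-14.log.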
I am developing a REST service in Python which is deployed as a Lambda on AWS. Initially nothing was logged to CloudWatch, so I introduced watchtower.
Below is my logging.ini file.
[loggers]
keys=root
[handlers]
keys=screen, WatchtowerHandler
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=screen
[logger_xyz]
level=DEBUG
handlers=screen, WatchtowerHandler
qualname=xyz
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
class=logging.Formatter
[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('log/xyz.log','a',100000,100)
formatter=logfileformatter
[handler_screen]
class=StreamHandler
args = (sys.stdout)
formatter=logfileformatter
[handler_WatchtowerHandler]
class=watchtower.CloudWatchLogHandler
formatter=formatter
send_interval=1
args= ()
The above works fine for logging from the config code:
LOG.info("dev config detected")
But LOG.info() calls from any other code in the application, specifically the REST handlers, are not logged, even though the logging setup is the same everywhere.
You could use either Watchtower or the Cloudwatch handler.
Watchtower: https://github.com/kislyuk/watchtower
import watchtower, logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.addHandler(watchtower.CloudWatchLogHandler())
logger.info("Hi")
logger.info(dict(foo="bar", details={}))
Cloudwatch Handler: https://pypi.org/project/cloudwatch/
import logging
from cloudwatch import cloudwatch
#Create the logger
logger = logging.getLogger('my_logger')
#Create the formatter
formatter = logging.Formatter('%(asctime)s : %(levelname)s - %(message)s')
# ---- Create the Cloudwatch Handler ----
handler = cloudwatch.CloudwatchHandler('AWS_KEY_ID','AWS_SECRET_KEY','AWS_REGION','AWS_LOG_GROUP','AWS_LOG_STREAM')
#Pass the formater to the handler
handler.setFormatter(formatter)
#Set the level
logger.setLevel(logging.WARNING)
#Add the handler to the logger
logger.addHandler(handler)
#USE IT!
logger.warning("Watch out! Something happened!")
I like TimedRotatingFileHandler and I want to use it.
However my current logging is to my liking and includes some formatting, as well as logging in the console, like this:
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(levelname)-8s %(message)s',
datefmt=dtfmt,
filename=logfile,
filemode='w')
console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s',
datefmt='%m-%d %H:%M')
console.setFormatter(formatter)
logging.getLogger('').addHandler(console)
logger = logging.getLogger('logger')
logger.info('log something')
This returns something like this in a logfile:
2015-08-14 22:55:35 INFO log something
And this in the console:
08-14 22:56 INFO log something
How do I achieve this, but in addition have the log file rotating every midnight?
An old question, but since it was the top search result on Google when I wanted to do the same thing, I'll add my solution.
In essence: you should not use basicConfig here. Add both handlers manually.
import logging, logging.handlers
#get the root logger
rootlogger = logging.getLogger()
#set overall level to debug, default is warning for root logger
rootlogger.setLevel(logging.DEBUG)
#setup logging to file, rotating at midnight
filelog = logging.handlers.TimedRotatingFileHandler('./mylogfile.log',
when='midnight', interval=1)
filelog.setLevel(logging.DEBUG)
fileformatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
filelog.setFormatter(fileformatter)
rootlogger.addHandler(filelog)
#setup logging to console
console = logging.StreamHandler()
console.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s',
datefmt='%m-%d %H:%M')
console.setFormatter(formatter)
rootlogger.addHandler(console)
#get a logger for my script
logger = logging.getLogger(__name__)
logger.info('This is logged to console and file')
logger.debug('This is only logged to file')
In similar questions the duplicate output was caused by the logger being configured twice. So maybe in my case it is caused by the second getLogger() call. Any ideas how I can fix it?
import logging
import logging.handlers
logger = logging.getLogger("")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.RotatingFileHandler(
"the_log.log", maxBytes=3000000, backupCount=2)
formatter = logging.Formatter(
'[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# This is required to print messages to the console as well as the log file.
logging.getLogger().addHandler(logging.StreamHandler())
Using a config file, e.g. logging.config.fileConfig('logging.ini'):
[logger_root]
level=ERROR
handlers=stream_handler
[logger_webserver]
level=DEBUG
handlers=stream_handler
qualname=webserver
propagate=0
You have to set propagate=0 (logger.propagate in the Python 3 docs) when you're configuring the root logger and using non-root loggers at the same time; otherwise each record handled by a non-root logger also propagates up to the root logger's handlers and gets emitted twice.
I know this was asked a long time ago, but it's the top result on DuckDuckGo.
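The same fix expressed in code rather than a config file (a sketch; the 'webserver' name just mirrors the config above):

```python
import logging

# Root logger with its own handler
root = logging.getLogger()
root.setLevel(logging.ERROR)
root.addHandler(logging.StreamHandler())

# Non-root logger with its own handler
webserver = logging.getLogger('webserver')
webserver.setLevel(logging.DEBUG)
webserver.addHandler(logging.StreamHandler())

# Without this, every record logged on 'webserver' would also bubble
# up to the root logger's handler and be printed a second time.
webserver.propagate = False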
In an IPython notebook cell I wrote:
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
handler = logging.FileHandler('model.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
Notice that I am supplying a file name, but not a path.
Where could I find that log? (I ran 'find' and couldn't locate it...)
There are multiple ways to set the IPython working directory. If you don't set any of them in your IPython profile/config, environment, or notebook, the log should be in your working directory. Also try $ ipython locate to print the default IPython directory path; the log may be there.
What about giving it an absolute file path to see if it works at all?
Other than that, the call to logging.basicConfig doesn't seem to do anything inside an IPython notebook:
# In:
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger()
logger.debug('root debug test')
There's no output.
As per the docs, logging.basicConfig does nothing if the root logger already has handlers configured for it, and that is the case here: IPython apparently sets up the root logger itself. We can confirm it:
# In:
import logging
logger = logging.getLogger()
logger.handlers
# Out:
[<logging.StreamHandler at 0x106fa19d0>]
So we can try setting the root logger level manually:
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.debug('root debug test')
which yields a formatted output in the notebook:
Now onto setting up the file logger:
# In:
import logging
# set root logger level
root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)
# setup custom logger
logger = logging.getLogger(__name__)
handler = logging.FileHandler('model.log')
handler.setLevel(logging.INFO)
logger.addHandler(handler)
# log
logger.info('test info my')
which results in writing the output both to the notebook and the model.log file, which for me is located in a directory I started IPython and notebook from.
Mind that repeatedly running this piece of code without restarting the IPython kernel will create and attach yet another handler to the logger on every run, so the number of messages written to the file by each log call will keep growing.
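One way to guard against that (a sketch, assuming the notebook cell above): remove any previously attached handlers before adding a fresh one, so re-running the cell stays idempotent:

```python
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# Remove handlers attached by a previous run of this cell.
for old in list(logger.handlers):
    logger.removeHandler(old)
    old.close()

handler = logging.FileHandler('model.log')
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
```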
Declare the path of the log file in basicConfig like this (the filename must point to a file, not a directory):
log_file_path = "/your/path/logfile.log"
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    filename=log_file_path,
                    filemode='w')
You can then start logging, and why not add a different log format for the console if you want:
# define a Handler which writes DEBUG messages or higher to sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger().addHandler(console)
logger = logging.getLogger()
et voilà.