This snippet sets up logging so that ERROR level and above goes to the console and DEBUG and above goes to the log file. What I can't seem to figure out is how to reuse that config across my various modules so that they all write to the same logfile, but %(name)s correctly indicates the module that generated each message.
Thanks in advance!
import logging
default_formatter = logging.Formatter(
    "%(asctime)s:%(name)s:%(levelname)s:%(message)s")
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)
console_handler.setFormatter(default_formatter)
file_handler = logging.FileHandler("error.log", "a")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(default_formatter)
noralog = logging.getLogger(__name__)
noralog.setLevel(logging.DEBUG)
noralog.addHandler(console_handler)
noralog.addHandler(file_handler)
noralog.debug('PUT ME ONLY IN THE FILE')
noralog.error('STREAM AND FILE')
This seems to work, but I'm not sure it's the best solution:
#noralog.py
def setup_logging(localname):
    import logging

    default_formatter = logging.Formatter(
        "%(asctime)s:%(name)s:%(levelname)s:%(message)s")

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.ERROR)
    console_handler.setFormatter(default_formatter)

    file_handler = logging.FileHandler("error.log", "a")
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(default_formatter)

    noralog = logging.getLogger(localname)
    noralog.setLevel(logging.DEBUG)
    noralog.addHandler(console_handler)
    noralog.addHandler(file_handler)
    return noralog
#othermod.py
import noralog
ff = noralog.setup_logging(__name__)
ff.debug('PUT THIS ONLY IN FILE LOG')
ff.error('PUT THIS IN STREAM AND FILE LOG')
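For what it's worth, the usual pattern for sharing one config across modules is to attach the handlers to the root logger exactly once at startup, and have every module simply call logging.getLogger(__name__): records propagate up to the root's handlers, so everything lands in the same file while %(name)s still shows the originating module. A minimal sketch along those lines (the logconfig module name is made up for illustration):

#logconfig.py
import logging

def setup():
    """Call exactly once, from the program entry point."""
    default_formatter = logging.Formatter(
        "%(asctime)s:%(name)s:%(levelname)s:%(message)s")

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.ERROR)
    console_handler.setFormatter(default_formatter)

    file_handler = logging.FileHandler("error.log", "a")
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(default_formatter)

    root = logging.getLogger()   # no name argument -> the root logger
    root.setLevel(logging.DEBUG)
    root.addHandler(console_handler)
    root.addHandler(file_handler)

#othermod.py
import logging
log = logging.getLogger(__name__)        # no handler setup needed here
log.debug('PUT THIS ONLY IN FILE LOG')   # propagates to the root handlers
log.error('PUT THIS IN STREAM AND FILE LOG')

This also means a module never needs to know where the handlers live; it just asks for a logger named after itself.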
I am struggling to enable DEBUG logging for a Glue script using PySpark only.
I have tried:
import...

def quiet_logs(sc):
    logger = sc._jvm.org.apache.log4j
    logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR)
    logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR)

def main():
    # Get the Spark Context
    sc = SparkContext.getOrCreate()
    sc.setLogLevel("DEBUG")
    quiet_logs(sc)
    context = GlueContext(sc)
    logger = context.get_logger()
    logger.debug("I only want to see this..., and for all others, only ERRORS")
    ...
I have '--enable-continuous-cloudwatch-log' set to true, but I simply cannot get the log trail to write only the debug messages from my own script.
I haven't managed to do exactly what you want, but I was able to do something similar by setting up a separate custom log, and this might achieve what you're after.
import os
import sys
import logging

from awsglue.utils import getResolvedOptions
from watchtower import CloudWatchLogHandler

args = getResolvedOptions(sys.argv, ["JOB_RUN_ID"])
job_run_id = args["JOB_RUN_ID"]
os.environ["AWS_DEFAULT_REGION"] = "eu-west-1"

# custom stream name, derived from the job run id
lsn = f"{job_run_id}_custom"
cw = CloudWatchLogHandler(
    log_group="/aws-glue/jobs/logs-v2", stream_name=lsn, send_interval=4
)

slog = logging.getLogger()
slog.setLevel(logging.DEBUG)
slog.handlers = []   # drop any handlers Glue pre-installed on the root logger
slog.addHandler(cw)

slog.info("hello from the custom logger")
Now anything you log to slog will go into a separate log stream, accessible as one of the entries in the 'output' logs.
Note that you need to include watchtower via --additional-python-modules when you run the Glue job.
This can be done with a regular logging setup in Python.
I have tested this in a Glue job and only the one debug message was visible. The job was configured with "Continuous logging", and the messages ended up in "Output logs".
import logging
# Warning level on root
logging.basicConfig(level=logging.WARNING, format='%(asctime)s [%(levelname)s] [%(name)s] %(message)s')
logger = logging.getLogger(__name__)
# Debug level only for this logger
logger.setLevel(logging.DEBUG)
logger.debug("DEBUG_LOG test")
You can also mute specific loggers:
logging.getLogger('botocore.vendored.requests.packages.urllib3.connectionpool').setLevel(logging.WARN)
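Because logger names form a dot-separated hierarchy, setting the level on a parent should silence all of its children at once, so the long name above can usually be shortened. A small sketch of that idea:

import logging

# A level set on 'botocore' applies to botocore.vendored.*, botocore.hooks,
# and every other descendant, since child loggers left at NOTSET defer to
# the nearest ancestor with an explicit level.
logging.getLogger('botocore').setLevel(logging.WARNING)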
I'm running Django 3.1 on Docker and I want to log to a different file every day. I have a couple of crons running and also Celery tasks. I don't want to log to one file, because many processes would be writing to it and debugging/reading the file would be difficult.
If I have cron tasks my_cron_1, my_cron_2, my_cron_3, I want each to be able to log to its own file with the date appended:
MyCron1_2020-12-14.log
MyCron2_2020-12-14.log
MyCron3_2020-12-14.log
MyCron1_2020-12-15.log
MyCron2_2020-12-15.log
MyCron3_2020-12-15.log
MyCron1_2020-12-16.log
MyCron2_2020-12-16.log
MyCron3_2020-12-16.log
Basically, I want to be able to pass in a name to a function that will write to a log file.
Right now I have a class MyLogger
import logging

class MyLogger:
    def __init__(self, filename):
        # Gets or creates a logger
        self._filename = filename

    def log(self, message):
        message = str(message)
        print(message)
        logger = logging.getLogger(__name__)

        # set log level
        logger.setLevel(logging.DEBUG)

        # define file handler and set formatter
        file_handler = logging.FileHandler('logs/' + self._filename + '.log')
        #formatter = logging.Formatter('%(asctime)s : %(levelname)s: %(message)s')
        formatter = logging.Formatter('%(asctime)s : %(message)s')
        file_handler.setFormatter(formatter)

        # add file handler to logger
        logger.addHandler(file_handler)

        # Logs
        logger.info(message)
I call the class like this
logger = MyLogger("FirstLogFile_2020-12-14")
logger.log("ONE")
logger1 = MyLogger("SecondLogFile_2020-12-14")
logger1.log("TWO")
FirstLogFile_2020-12-14 will have both ONE and TWO, but it should only have ONE.
SecondLogFile_2020-12-14 will have TWO, as expected.
Why is this? Why are the logs being written to the incorrect file? What's wrong with my code?
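The likely culprit: logging.getLogger(__name__) returns the same cached logger object on every call, so each MyLogger.log() call stacks another FileHandler onto that one shared logger, and every later message fans out to all files added so far. A sketch of one possible fix, keying the logger on the filename and attaching the handler only once:

import logging

class MyLogger:
    def __init__(self, filename):
        # One logger per filename; getLogger caches by name, so the same
        # name always returns the same logger object.
        self._logger = logging.getLogger(filename)
        self._logger.setLevel(logging.DEBUG)

        # Attach the file handler only the first time this name is seen,
        # otherwise repeated MyLogger() calls would stack handlers again.
        if not self._logger.handlers:
            file_handler = logging.FileHandler('logs/' + filename + '.log')
            file_handler.setFormatter(
                logging.Formatter('%(asctime)s : %(message)s'))
            self._logger.addHandler(file_handler)

    def log(self, message):
        message = str(message)
        print(message)
        self._logger.info(message)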
I am very new to programming. I am working on a pipeline to analyze DMARC report files that are sent to my email account and that I am manually placing in an S3 bucket. The goal of this task is to download, extract, and analyze the files using parsedmarc: https://github.com/domainaware/parsedmarc The part I'm having difficulty with is writing a conditional statement to extract .gz files if the target file is not a .zip file. I'm assuming the gzip library will be sufficient for this purpose. Here is the code I have so far; I'm using Python 3 and the boto3 library for AWS. Any help is appreciated!
import parsedmarc
import pprint
import json
import boto3
import zipfile
import gzip

pp = pprint.PrettyPrinter(indent=2)

def main():
    #Set default session profile and region for sandbox account. Access keys are pulled from ~/.aws/config and ~/.aws/credentials.
    #The 'profile_name' value comes from the header for the account in question in ~/.aws/config and ~/.aws/credentials
    boto3.setup_default_session(profile_name="aws-account-profile-name-goes-here",
                                region_name="aws-region-goes-here")

    #Define the s3 resource, the bucket name, and the file to download. It's hardcoded for now...
    s3_resource = boto3.resource('s3')
    s3_resource.Bucket('dmarc-parsing').download_file('source-dmarc-report-filename.zip',
                                                      '/home/user/dmarc/parseme.zip')

    #Use the zipfile python library to extract the file into its raw state.
    with zipfile.ZipFile('/home/user/dmarc/parseme.zip', 'r') as zip_ref:
        zip_ref.extractall('/home/user/dmarc')

    #Ingest all locations for xml file source
    dmarc_report_directory = '/home/user/dmarc/'
    dmarc_report_file = 'parseme.xml'

    """I need an if statement here for extracting .gz files if the file type is not .zip. The contents of every archive are .xml files"""

    #Set report output variables using functions in parsedmarc. Variable set to equal the output
    pd_report_output = parsedmarc.parse_aggregate_report_file(_input=f"{dmarc_report_directory}{dmarc_report_file}")

    #use jsonify to make the output in json format
    pd_report_jsonified = json.loads(json.dumps(pd_report_output))

    dkim_status = pd_report_jsonified['records'][0]['policy_evaluated']['dkim']
    spf_status = pd_report_jsonified['records'][0]['policy_evaluated']['spf']

    if dkim_status == 'fail' or spf_status == 'fail':
        print(f"{dmarc_report_file} reports failure. oh crap. report:")
    else:
        print(f"{dmarc_report_file} passes. great. report:")

    pp.pprint(pd_report_jsonified['records'][0]['auth_results'])

if __name__ == "__main__":
    main()
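For the "if statement" placeholder above, here is one possible shape, sketched under the assumption that each archive holds a single .xml file and that the extension reliably indicates the format (paths reuse the hardcoded ones from the question):

import gzip
import shutil
import zipfile

archive_path = '/home/user/dmarc/parseme.zip'   # path from the download step
extract_dir = '/home/user/dmarc/'

if zipfile.is_zipfile(archive_path):
    # .zip archives: extract everything as before
    with zipfile.ZipFile(archive_path, 'r') as zip_ref:
        zip_ref.extractall(extract_dir)
else:
    # assume gzip: a .gz file is a single compressed file, not an archive,
    # so decompress it straight to the target .xml path
    with gzip.open(archive_path, 'rb') as gz_ref, \
            open(extract_dir + 'parseme.xml', 'wb') as out_file:
        shutil.copyfileobj(gz_ref, out_file)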
Here is the code using the parsedmarc.parse_aggregate_report_xml method I found. I hope this helps others parsing these reports:
import parsedmarc
import pprint
import json
import boto3
import zipfile
import gzip

pp = pprint.PrettyPrinter(indent=2)

def main():
    #Set default session profile and region for account. Access keys are pulled from ~/.aws/config and ~/.aws/credentials.
    #The 'profile_name' value comes from the header for the account in question in ~/.aws/config and ~/.aws/credentials
    boto3.setup_default_session(profile_name="aws_profile_name_goes_here", region_name="region_goes_here")

    source_file = 'filename_in_s3_bucket.zip'
    destination_directory = '/tmp/'
    destination_file = 'compressed_report_file'

    #Define the s3 resource, the bucket name, and the file to download. It's hardcoded for now...
    s3_resource = boto3.resource('s3')
    s3_resource.Bucket('bucket-name-for-dmarc-report-files').download_file(source_file, f"{destination_directory}{destination_file}")

    #Extract xml
    outputxml = parsedmarc.extract_xml(f"{destination_directory}{destination_file}")

    #run parse dmarc analysis & convert output to json
    pd_report_output = parsedmarc.parse_aggregate_report_xml(outputxml)
    pd_report_jsonified = json.loads(json.dumps(pd_report_output))

    #loop through results and find relevant status info and pass fail status
    dmarc_report_status = ''
    for record in pd_report_jsonified['records']:
        if False in record['alignment'].values():
            dmarc_report_status = 'Failed'
            #************ add logic for interpreting results

    #if fail, publish to sns
    if dmarc_report_status == 'Failed':
        message = "Your dmarc report failed at least one check. Review the log for details"
        sns_resource = boto3.resource('sns')
        sns_topic = sns_resource.Topic('arn:aws:sns:us-west-2:112896196555:TestDMARC')
        sns_publish_response = sns_topic.publish(Message=message)

if __name__ == "__main__":
    main()
I want to create a zip file of my log file. I have created the log file using the Python logging module and RotatingFileHandler.
import logging
from logging.handlers import RotatingFileHandler

# create a logging format
log_formatter = logging.Formatter('Date: %(asctime)s - %(message)s')

logFile = scheduler_name + "_" + scheduler_id + ".log"

# create a file handler
myhandler = RotatingFileHandler(logFile, mode='a', maxBytes=5*1024*1024,
                                backupCount=2, encoding=None, delay=0)
myhandler.setFormatter(log_formatter)
myhandler.setLevel(logging.INFO)

# add the handlers to the logger
app_log = logging.getLogger()
app_log.addHandler(myhandler)
Using that, I have created a log file, and I want to create a zip file of it, ideally using the logging module's built-in functionality.
I didn't try it, but it should possibly look something like this. Take care to generate dst_file_name dynamically, just like your logFile:
import subprocess  # the Python 2 'commands' module is gone; subprocess.getoutput is its Python 3 equivalent

myhandler.doRollover()
for i in range(myhandler.backupCount, 0, -1):
    dst_file_name = 'myzip.zip'
    # RotatingFileHandler names its backups logFile.1, logFile.2, ...
    src_file_name = "%s.%d" % (logFile, i)
    cmd = "zip %s %s" % (dst_file_name, src_file_name)
    subprocess.getoutput(cmd)
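Since Python 3.3 the handler itself exposes hooks for this, so shelling out to zip can be avoided entirely. A sketch using the rotator and namer attributes described in the logging cookbook, compressing each rotated file with gzip (gzip rather than zip, since each backup is a single file; the "scheduler.log" filename is illustrative):

import gzip
import os
import shutil
from logging.handlers import RotatingFileHandler

def gzip_namer(name):
    # called to compute the rotated file's name, e.g. app.log.1 -> app.log.1.gz
    return name + ".gz"

def gzip_rotator(source, dest):
    # called to perform the rotation: compress the closed log file, then remove it
    with open(source, "rb") as sf, gzip.open(dest, "wb") as df:
        shutil.copyfileobj(sf, df)
    os.remove(source)

myhandler = RotatingFileHandler("scheduler.log", mode='a',
                                maxBytes=5*1024*1024, backupCount=2)
myhandler.rotator = gzip_rotator
myhandler.namer = gzip_namer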
I am following an example to read a config file from the following: https://wiki.python.org/moin/ConfigParserExamples
But I get a KeyError and can't figure out why. It is reading the file, and I can even print the sections. I think I am doing something really stupid. Any help is greatly appreciated.
Here is the code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import ConfigParser
import logging

config_default = ConfigParser.ConfigParser()

class Setting(object):
    def get_setting(self, section, my_setting):
        default_setting = self.default_section_map(section)[my_setting]
        return default_setting

    def default_section_map(self, section):
        dict_default = {}
        config_default.read('setting.cfg')
        sec = config_default.sections()
        options_default = config_default.options(section)
        logging.info('options_default: {0}'.format(options_default))
        for option in options_default:
            try:
                dict_default[option] = config_default.get(section, option)
                if dict_default[option] == -1:
                    print("skip: %s" % option)
            except:
                print("exception on %s!" % option)
                dict_default[option] = None
        return dict_default
        return complete_path

if __name__ == '__main__':
    conf = Setting()
    host = conf.get_setting('mainstuff', 'name')
    #host=conf.setting
    print 'host setting is :' + host
My config file is named setting.cfg and looks like this:
[mainstuff]
name = test1
domain = test2
[othersection]
database_ismaster = no
database_master = test3
database_default = test4
[mysql]
port = 311111
user = someuser
passwd = somecrazylongpassword
[api]
port = 1111
And the Error is this...
exception on domain!
Traceback (most recent call last):
  File "./t.py", line 51, in <module>
    host=conf.get_setting('mainstuff','name')
  File "./t.py", line 14, in get_setting
    default_setting = self.default_section_map(section)[my_setting]
KeyError: 'name'
Be sure that your complete file path is setting.cfg. If you put the file in another folder, or if it is named differently, ConfigParser silently reads nothing and Python reports the same KeyError.
You have no general section. In order to get the hostname you need something like
[general]
hostname = 'hostname.net'
in your setting.cfg. Now your config file matches the program -- maybe you would prefer to adapt your program to match the config file instead? ...this should get you started at least.
UPDATE:
As my answer is useless now, here is something you could try to build on (assuming it works for you...):
import ConfigParser

class Setting(object):
    def __init__(self, cfg_path):
        self.cfg = ConfigParser.ConfigParser()
        self.cfg.read(cfg_path)

    def get_setting(self, section, my_setting):
        try:
            ret = self.cfg.get(section, my_setting)
        except ConfigParser.NoOptionError:
            ret = None
        return ret

if __name__ == '__main__':
    conf = Setting('setting.cfg')
    host = conf.get_setting('mainstuff', 'name')
    print 'host setting is :', host
This error occurs mainly for two reasons:
1. The config file is not actually being read, usually because the path is wrong; an absolute path may help. Try reading the file directly first to confirm it can be opened:
f = open("config.ini", "r")
print(f.read())
2. The mentioned section does not exist in the config file.
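For the first case, ConfigParser's read() returns the list of files it successfully parsed, so a cheap sanity check is to inspect its return value. A small sketch in the same Python 2 style as the question (the hardcoded path is illustrative):

import ConfigParser

config = ConfigParser.ConfigParser()
read_files = config.read('setting.cfg')

# read() silently skips files it cannot open, so check what actually got parsed
if not read_files:
    raise IOError('setting.cfg was not found or could not be read')
print config.sections()   # should list: mainstuff, othersection, mysql, api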