I'm using Python's logging module to log some debug strings to a file, and that part works well.
I'm using the script below, but it doesn't give me any output on the screen.
How can I show all print output on the screen and also write it to the log with the same formatting?
Any help?
import logging
import sys

class StreamToLogger(object):
    """
    Fake file-like stream object that redirects writes to a logger instance.
    """
    def __init__(self, logger, log_level=logging.INFO):
        self.logger = logger
        self.log_level = log_level
        self.linebuf = ''

    def write(self, buf):
        for line in buf.rstrip().splitlines():
            self.logger.log(self.log_level, line.rstrip())

    def flush(self):
        # needed so code that calls sys.stdout.flush() does not fail
        pass

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a'
)

stdout_logger = logging.getLogger('STDOUT')
sl = StreamToLogger(stdout_logger, logging.INFO)
sys.stdout = sl

#stderr_logger = logging.getLogger('STDERR')
#sl = StreamToLogger(stderr_logger, logging.ERROR)
#sys.stderr = sl
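One possible way to get both (a sketch only, building on the snippet above): attach a console StreamHandler to the root logger in addition to the file configuration, and point it at the real terminal stream so its output does not recurse through the redirected sys.stdout:

# Console handler pointed at the original stdout so its output does not
# loop back through the StreamToLogger redirection above.
console = logging.StreamHandler(stream=sys.__stdout__)
console.setLevel(logging.DEBUG)
console.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s:%(name)s:%(message)s'))
logging.getLogger().addHandler(console)

print("this line should now show up on screen and in out.log")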
I have a Flask application which calls two functions from two different files.
I've set up logging to two different files.
However, the logging always seems to append to one file (whichever endpoint is hit first).
Here is the structure of the files -
app.py

from flask import Flask
from file1 import fun1
from file2 import fun2

app = Flask(__name__)

@app.route("/end_point1")
def data_1():
    return fun1()

@app.route("/end_point2")
def data_2():
    return fun2()
file1.py

import logging

def fun1():
    logging.basicConfig(filename="file1.log")
    logging.info("Logging function 1 details to file1")
    return foo
file2.py

import logging

def fun2():
    logging.basicConfig(filename="file2.log")
    logging.info("Logging function 2 details to file2")
    return bar
This logs fine (separately) when I run the individual Python files - file1.py / file2.py.
But the logs append to just one file when I run the API.
What am I doing wrong with the logging? How can I fix it?
Add this to logger_setup.py
import logging

formatter = logging.Formatter('%(asctime)s - %(levelname)s - [%(filename)s:%(lineno)d] - %(message)s')

def setup_logger(name, log_file, level=logging.DEBUG):
    logger = logging.getLogger(name)

    # Only attach a file handler the first time this named logger is requested;
    # on later calls the already-configured logger is returned as-is, so no
    # duplicate handlers are added.
    if not logger.handlers:
        handler = logging.FileHandler(log_file, mode='a')
        handler.setFormatter(formatter)
        logger.setLevel(level)
        logger.addHandler(handler)
        logger.propagate = False

    return logger
Inside file1 and file2 you can use something like this (the logs directory must already exist):

import os
import logging
import logger_setup

def fun1():
    log_location = 'logs'
    logger = logger_setup.setup_logger(__name__, os.path.join(log_location, __name__ + '.log'))
    logger.info("Logging function 1 details to file1")
    return "1"
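file2.py would follow the same pattern (a sketch mirroring the snippet above):

import os
import logging
import logger_setup

def fun2():
    log_location = 'logs'
    logger = logger_setup.setup_logger(__name__, os.path.join(log_location, __name__ + '.log'))
    logger.info("Logging function 2 details to file2")
    return "2"

Because __name__ differs per module ('file1' vs 'file2'), each module gets its own logger and its own log file, which is what stops both endpoints from writing to whichever file happened to be configured first.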
PyQt4/QProcess issues with Nuke v9...
I am trying to use a QProcess to run renders in Nuke at my workplace. The reason I want to use a QProcess is that I've set up this Task Manager with the help of the community at Stack Overflow, which takes a list of commands, runs them sequentially one by one, and also allows me to display the output. You can view the question I posted here:
How to update UI with output from QProcess loop without the UI freezing?
Now I am basically trying to run Nuke renders through this "Task Manager". But every time I do, it just gives me an error that the QProcess was destroyed while still running. I tested this with subprocess and it worked totally fine, so I am not sure why the renders are not working through QProcess.
To do more testing, I wrote a simplified version at home. The first issue I ran into, though, is that PyQt4 apparently couldn't be found from Nuke's python.exe, even though I have PyQt4 for my main Python version. Apparently there is a compatibility issue with my installed PyQt4, since my main Python version is 2.7.12 while Nuke's Python version is 2.7.3. So I thought, "fine, then I'll just install PyQt4 directly inside my Nuke directory," grabbed the link below and installed that PyQt version into my Nuke directory:
http://sourceforge.net/projects/pyqt/files/PyQt4/PyQt-4.10.3/PyQt4-4.10.3-gpl-Py2.7-Qt4.8.5-x64.exe
So I ran my little test, and it seems to do the same thing as it does at my workplace, where the QProcess just gets destroyed. I thought maybe adding waitForFinished() would do something different, but then it gives me this error:
The procedure entry point ??4QString@@QEAAAEAV0@$$QEAV0@@Z could not be located in the dynamic link library QtCore4.dll
And it gives me this error as well:
ImportError: Failed to load C:\Program Files\Nuke9.0v8\nuke-9.0.8.dll
Now at this point I can't really do any more testing at home, and my studio is closed for the holidays. So I have two questions I'd like to ask:
1) What is this "procedure entry point" error? It only happens when I try to call something on a QProcess instance.
2) Why is my QProcess being destroyed before the render is finished? How come this doesn't happen with subprocess? How can I submit a Nuke render job while achieving the same results as subprocess?
Here is my test code:
import os
import sys
import subprocess

import PyQt4
from PyQt4 import QtCore


class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args


class SequentialManager(QtCore.QObject):
    started = QtCore.pyqtSignal()
    finished = QtCore.pyqtSignal()
    progressChanged = QtCore.pyqtSignal(int)
    dataChanged = QtCore.pyqtSignal(str)
    # ^ this is how we can send a signal and can declare what type
    # of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)

        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)


class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.pyqtSlot()
    def on_started(self):
        print 'process started'

    @QtCore.pyqtSlot()
    def on_finished(self):
        print 'finished'

    @QtCore.pyqtSlot(str)
    def on_dataChanged(self, message):
        if message:
            print message


def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    # nuke.execute(writeNode, fr)
    for x in range(20):
        print 'random'


def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, __file__)
    cmd2 = [__file__, 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished()


if __name__ == "__main__":
    print sys.argv
    if len(sys.argv) == 1:
        run()
    elif len(sys.argv) == 2:
        nukeTestRender()
I have managed to come up with an answer, so I will write up the details below:
Basically, I was getting the error with the installed PyQt4 because it was not compatible with my version of Nuke; it is apparently recommended to use the PySide build included with Nuke instead. However, Nuke's Python executable cannot find PySide natively, so a few paths needed to be added to sys.path:
paths = ['C:\\Program Files\\Nuke9.0v8\\lib\\site-packages',
         'C:\\Users\\Desktop02\\.nuke',
         'C:\\Program Files\\Nuke9.0v8\\plugins',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\setuptools-0.6c11-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\protobuf-2.5.0-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages',
         'C:\\Program Files\\Nuke9.0v8\\plugins\\modules',
         'C:\\Program Files\\Nuke9.0v8\\configs\\Python\\site-packages',
         'C:\\Users\\Desktop02\\.nuke\\Python\\site-packages']

for path in paths:
    sys.path.append(path)
I found the missing paths by opening up both Nuke in GUI mode and the Python executable, and comparing both sys.path to see what the Python executable was lacking.
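For reference, the comparison just amounts to printing sys.path in each interpreter and diffing the two outputs (a sketch):

import sys

# Run this once in Nuke's Script Editor and once from Nuke's python.exe,
# then compare the two lists to find the missing entries.
for p in sys.path:
    print(p)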
And to answer my own main question: if I call waitForFinished(-1) on the QProcess instance, it ignores the function's default 30-second timeout. The answer came from this thread:
QProcess and shell : Destroyed while process is still running
So here is my resulting working code:
import os
import sys
import subprocess

sysArgs = sys.argv

try:
    import nuke
    from PySide import QtCore
except ImportError:
    raise ImportError('nuke not currently importable')


class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args


class SequentialManager(QtCore.QObject):
    started = QtCore.Signal()
    finished = QtCore.Signal()
    progressChanged = QtCore.Signal(int)
    dataChanged = QtCore.Signal(str)
    # ^ this is how we can send a signal and can declare what type
    # of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)

        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)


class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.Slot()
    def on_started(self):
        print 'process started'

    @QtCore.Slot()
    def on_finished(self):
        print 'finished'

    @QtCore.Slot(str)
    def on_dataChanged(self, message):
        if message:
            print message


def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    nuke.execute(writeNode, fr)
    # nuke.execute(writeNode, start=1, end=285)
    for x in range(20):
        print 'random'


def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, sysArgs[0])
    cmd2 = [sysArgs[0], 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished(-1)


if __name__ == "__main__":
    print sys.argv
    if len(sysArgs) == 1:
        run()
    elif len(sysArgs) == 2:
        nukeTestRender()
For whatever reason, PySide refuses to load for me unless the nuke module is imported first. There is also a known issue where importing nuke wipes out all sys.argv arguments, so argv has to be stored somewhere before the nuke import...
import logging
import logging.handlers

LOG_FILENAME = 'logger.log'

logging.basicConfig(filename=LOG_FILENAME,
                    level=logging.DEBUG,
                    )

logging.info('In Log File...')

f = open(LOG_FILENAME, 'rt')
try:
    body = f.read()
finally:
    f.close()

print 'FILE:'
print body

def syslog(debug, info, warning, error, critical="LEVELS"):
    msg = "Debug: " + debug + " Info: " + info + " Warning: " + warning + " Error: " + error + " Critical: " + critical
    return msg

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

handler = logging.handlers.SysLogHandler(address=('localhost', 514), facility=19)
my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')
my_logger.info('this is info')
my_logger.warning('this is warning')
You could establish a central log server and send all log messages from all machines to it via HTTP (or sockets).
The logging module has many features, including so-called handlers. Simply put, a logging handler is a class whose emit method takes a log record as input and does something with it; for example, writing the record to a given file is the default behavior (FileHandler). There are a lot of handlers already included in the logging module which could be useful. In your case, you could use HTTPHandler. SocketHandler is described on the same page; you can use it if you prefer sockets.
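For example, attaching an HTTPHandler might look roughly like this (a sketch; the host and path are placeholders for whatever your central log server exposes):

import logging
import logging.handlers

logger = logging.getLogger('MyLogger')
logger.setLevel(logging.DEBUG)

# Every record is sent as an HTTP request to http://<host><url>.
http_handler = logging.handlers.HTTPHandler(
    'logserver.example.com:8080',  # placeholder host:port
    '/log',                        # placeholder path on the server
    method='POST')
logger.addHandler(http_handler)

logger.info('this record is sent to the central log server')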
Or you could write your own custom handler:

class CustomHandler(logging.StreamHandler):
    def emit(self, record):
        msg = self.format(record)
        # msg is the record string to, em, record (sorry for much of a muchness)
        # Do whatever you want with it
HTTPHandler and your custom handler are added to a logger just as in your code
handler = logging.handlers.HTTPHandler(...)
or
handler = CustomHandler(...)
my_logger.addHandler(handler)
import logging
import socket
import sys
from logging.handlers import SysLogHandler

# class ContextFilter(object):
#     pass

class ContextFilter(logging.Filter):
    hostname = socket.gethostname()

    def filter(self, record):
        record.hostname = ContextFilter.hostname
        return True

try:
    syslog = SysLogHandler(address=('110.110.112.143', 514))
# except (IOError, NameError) as e:
#     print str(e)
except Exception as e:
    logging.exception("message contextfilter")
finally:
    pass

# try:
#     handler = logging.handlers.SysLogHandler(address=('localhost', 514), facility=19)
# except (IOError, NameError) as e:
#     print str(e)
# except Exception as e:
#     logging.exception("message contextfilter")
# finally:
#     pass

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

f = ContextFilter()
logger.addFilter(f)

# syslog = SysLogHandler(address=('110.110.112.202', 514))
# handler = logging.handlers.SysLogHandler(address='/dev/log')
# handler = logging.handlers.SysLogHandler(address=('localhost', 514), facility=19)
# logger.addHandler(handler)
# syslog = SysLogHandler(address=('192.168.56.102', 514))

formatter = logging.Formatter('%(hostname)s syslogs: [%(levelname)s] %(message)s', datefmt='%b %d %H:%M:%S')
# syslog.setFormatter(formatter)
# logger.addHandler(syslog)

# NullHandler handler exception
logging.getLogger('ContextFilter').addHandler(logging.NullHandler())

logging.debug("In Debug's Log File... ")
logging.info("In Information's Log File... ")
logging.warning("In Warning's Log File... ")
logging.error("In Error's Log File... ")
logging.critical("In Critical's Log File...")

print "This is logging INFO .... "
I am trying to grab the serial number from a Banner. I have successfully done it by storing the banner content in a file, but now I would like to try without storing it in a file. Below is the snippet of code:
import argparse
import logging
import re
import paramiko

def grab_banner(ip_address, port):
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(ip_address, port=port, username='username', password='bad-password-on-purpose')
    except:
        return client._transport.get_banner()

def GetSerialNo(ip_address, Banner):
    fp1 = open("Baner", "w")
    fp1.write(Banner)
    fp1.close()
    fp2 = open("Baner", "r")
    for line in fp2:
        if re.search("System S/N", line):
            Serial = line.split()
            return Serial[2]
    fp2.close()

if __name__ == '__main__':
    logger = logging.getLogger(__name__)
    parser = argparse.ArgumentParser(description='This is a demo script')
    parser.add_argument('-s', '--ipsetups', help='IP Address')
    args = parser.parse_args()
    Setup_File = args.ipsetups
    fp = open(Setup_File, "r")
    for line in fp.readlines():
        IP = line.strip()
        logger.info("================================ WORKING on %s ===================================", IP)
        Banner = grab_banner(IP, 22)
        serial = GetSerialNo(IP, Banner)
        logger.info("Serial Number is -> %s", serial)
    fp.close()
The above code works fine, but now I am trying to store the banner in a variable and grab the serial number from it directly, and I'm unable to do so. Below is what I am trying:
def get_ip(Setup_File):
    IPS = []
    with open(Setup_File, 'r') as f:
        for line in f:
            IPS = line.split()
    return IPS

def grab_banner(ip_address, port):
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(ip_address, port=port, username='username', password='bad-password-on-purpose')
    except:
        return client._transport.get_banner()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='This is a demo script by Mangesh Pardhi.')
    parser.add_argument('-s', '--ipsetups', help='PD-Setup IP Address')
    args = parser.parse_args()
    Setup_File = args.ipsetups
    print Setup_File
    IPS = get_ip(Setup_File)
    for IP in IPS:
        logger.info("================================ WORKING on %s ===================================", IP)
        Banner = grab_banner(IP, 22)
        if "System S/N" in Banner:
            # XXXXXXXXXX How To proceed XXXXXXXXXX
            serial = Serial[2]
            logger.info("Serial Number is -> %s", serial)
You could just simplify GetSerialNo in your original code.
def GetSerialNo(ip_address, Banner):
    for line in Banner.split('\n'):
        if re.search("System S/N", line):
            Serial = line.split()
            return Serial[2]
(Surely you already know that you don't need the parameter ip_address there.)
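As a quick sanity check (the banner text below is made up; the real format of the "System S/N" line depends on your device):

sample_banner = ("Welcome to DeviceOS\n"
                 "System S/N ABC12345\n"
                 "Authorized access only\n")

print(GetSerialNo("10.0.0.1", sample_banner))  # -> ABC12345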
I am having difficulties trading several strategies written in Python.
I have established a FIX connection via QuickFIX, but I can only send orders if the strategy's script sits inside the QuickFIX connection script. Since I have several strategies, I really have no idea how to send an order from a separate script. Can someone suggest a solution?
import sys
import datetime
import time
import quickfix as fix

class Application(fix.Application):
    orderID = 0
    execID = 0

    def gen_ord_id(self):
        global orderID
        orderID += 1
        return orderID

    def onCreate(self, sessionID):
        return

    def onLogon(self, sessionID):
        self.sessionID = sessionID
        print ("Successful Logon to session '%s'." % sessionID.toString())
        return

    def onLogout(self, sessionID):
        return

    def toAdmin(self, message, sessionID):
        username = fix.Username("username")
        mypass = fix.Password("password")
        mycompid = fix.TargetSubID("targetsubid")
        message.setField(username)
        message.setField(mypass)
        message.setField(mycompid)

    def fromAdmin(self, message, sessionID):
        TradeID = fix.TradingSessionID
        message.getField(TradeID)
        return

    def toApp(self, sessionID, message):
        print "Sent the following message: %s" % message.toString()
        return

    def fromApp(self, message, sessionID):
        print "Received the following message: %s" % message.toString()
        return

    def genOrderID(self):
        self.orderID = self.orderID + 1
        return str(self.orderID)

    def genExecID(self):
        self.execID = self.execID + 1
        return str(self.execID)

    def put_order(self, sessionID, myinstrument, myquantity):
        self.myinstrument = myinstrument
        self.myquantity = myquantity
        print("Creating the following order: ")
        today = datetime.datetime.now()
        nextID = today.strftime("%m%d%Y%H%M%S%f")
        trade = fix.Message()
        trade.getHeader().setField(fix.StringField(8, "FIX.4.4"))
        trade.getHeader().setField(fix.MsgType(fix.MsgType_NewOrderSingle))
        trade.setField(fix.ClOrdID(nextID))  # 11=Unique order id
        trade.getHeader().setField(fix.Account("account"))
        trade.getHeader().setField(fix.TargetSubID("targetsubid"))
        trade.setField(fix.Symbol(myinstrument))  # 55=SMBL ?
        trade.setField(fix.TransactTime())
        trade.setField(fix.CharField(54, fix.Side_BUY))
        trade.setField(fix.OrdType(fix.OrdType_MARKET))  # 40=1 Market order
        trade.setField(fix.OrderQty(myquantity))  # 38=quantity
        print trade.toString()
        fix.Session.sendToTarget(trade, self.sessionID)

try:
    file = sys.argv[1]
    settings = fix.SessionSettings(file)
    application = Application()
    storeFactory = fix.FileStoreFactory(settings)
    logFactory = fix.ScreenLogFactory(settings)
    initiator = fix.SocketInitiator(application, storeFactory, settings, logFactory)
    initiator.start()
    while 1:
        time.sleep(1)
        choice = raw_input()  # the original compared the built-in 'input' by mistake
        if choice == '1':
            print "Putting in Order"
            application.put_order(fix.Application)
        elif choice == '2':
            sys.exit(0)
        elif choice == 'd':
            import pdb
            pdb.set_trace()
        else:
            print "Valid input is 1 for order, 2 for exit"
except (fix.ConfigError, fix.RuntimeError) as e:
    print e
This is my initiator app. My question is: can I update the following values from another Python script?
trade.setField(fix.Symbol(myinstrument))
trade.setField(fix.OrderQty(myquantity))
So I want to change myinstrument and myquantity from another Python script and force the initiator to execute the command application.put_order(fix.Application) with the new values. Is this possible at all?
Sounds like you need an internal messaging layer that your QuickFIX application subscribes to and that your separate Python strategy scripts publish orders to. It's about workflow design. Try something like Vert.x, as that can be set up with Python.
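A minimal illustration of that idea using only the standard library (a sketch, not a drop-in implementation: the port number, the JSON field names, and the send_order helper are all made up for illustration). The initiator process listens for order requests on a local socket in a background thread and calls the existing put_order when one arrives; each strategy script just connects and sends its instrument and quantity:

import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5555  # assumption: any free local port works

def order_listener(application):
    # Run inside the QuickFIX initiator process, next to initiator.start().
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(5)
    while True:
        conn, _ = server.accept()
        data = conn.recv(4096)
        conn.close()
        order = json.loads(data.decode('utf-8'))
        # Hand the received values to the existing put_order method.
        application.put_order(application.sessionID,
                              order['instrument'],
                              order['quantity'])

# In the initiator script, after initiator.start():
#   threading.Thread(target=order_listener, args=(application,)).start()

def send_order(instrument, quantity):
    # Call this from any separate strategy script.
    payload = json.dumps({'instrument': instrument, 'quantity': quantity})
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((HOST, PORT))
    client.sendall(payload.encode('utf-8'))
    client.close()

# Example from a strategy script:
#   send_order('EUR/USD', 1000)

Whether the transport is a raw socket like this, a message queue, or something like Vert.x or ZeroMQ is a design choice; the key point is that the strategy scripts never touch QuickFIX directly, they only publish order requests that the single initiator process consumes.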