I want to spawn multiple processes using multiprocessing.Pool (Python 2.7.13) and redirect the stdout/stderr of each process to a file. The problem is that it works for stdout, but not for stderr. Here's an example with a single process.
import sys
import multiprocessing as mp

def foo():
    sys.stdout = open('a.out', 'w')
    sys.stderr = open('a.err', 'w')
    print("this must go to a.out.")
    raise Exception('this must go to a.err.')
    return True

def run():
    pool = mp.Pool(4)
    _retvals = []
    _retvals.append(pool.apply_async(foo))
    retvals = [r.get(timeout=10.) for r in _retvals]

if __name__ == '__main__':
    run()
Running python stderr.py in a terminal (on a MacBook) produces a.out with the correct message ("this must go to a.out"), but it produces an empty a.err, and the error message appears in the terminal window instead.
If I don't use multiprocessing.Pool and instead run foo() directly in the main thread, both files get the correct messages. That means replacing run() with the following snippet:
def run():
    foo()
When using pools, unhandled exceptions are handled by the main process. You should either redirect stderr in the parent process, or wrap your function like this:
from __future__ import print_function  # needed for print(..., file=...) on Python 2
import sys
import traceback

def foo():
    sys.stdout = open('x.out', 'a')
    sys.stderr = open('x.err', 'a')
    try:
        print("this goes to x.out.")
        print("this goes to x.err.", file=sys.stderr)
        raise ValueError('this must go to x.err.')
    except Exception:
        traceback.print_exc()
        raise  # optional: re-raise so the pool still reports the failure
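For the first suggestion, here is a minimal sketch of handling the failure in the parent instead (an illustration, not from the original post: it reuses the foo() above and relies on Pool re-raising worker exceptions in the process that calls get()):

import multiprocessing as mp
import traceback

def run():
    pool = mp.Pool(4)
    result = pool.apply_async(foo)
    try:
        result.get(timeout=10.)
    except Exception:
        # the worker's exception re-surfaces here, in the main process
        with open('a.err', 'a') as f:
            traceback.print_exc(file=f)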
PyQt4/QProcess issues with Nuke v9...
I am trying to utilize a QProcess to run renders in Nuke at my workplace. The reason I want to use a QProcess is that I've set up a Task Manager with the help of the community at Stack Overflow, which takes a list of commands, runs them sequentially one by one, and also lets me display the output. You can view the question I posted here:
How to update UI with output from QProcess loop without the UI freezing?
Now I am trying to basically run Nuke renders through this "Task Manager". But every time I do, it just gives me an error that the QProcess is destroyed while still running. I tested this with subprocess and that worked totally fine, so I am not sure why the renders are not working through QProcess.
To do more testing, I wrote a simplified version at home. The first issue I ran into is that PyQt4 apparently couldn't be found from Nuke's python.exe, even though I have PyQt4 for my main Python version. Apparently there is a compatibility issue with my installed PyQt4, since my main Python version is 2.7.12 while Nuke's Python version is 2.7.3. So I thought, "fine, then I'll just install PyQt4 directly inside my Nuke directory", grabbed this link, and installed this PyQt version into my Nuke directory:
http://sourceforge.net/projects/pyqt/files/PyQt4/PyQt-4.10.3/PyQt4-4.10.3-gpl-Py2.7-Qt4.8.5-x64.exe
So I ran my little test, and it seems to do the same thing as at my workplace, where the QProcess just gets destroyed. I thought maybe adding waitForFinished() would do something different, but then it gives me this error:
The procedure entry point ??4QString@@QEAAAEAV0@$$QEAV0@@Z could not be located in the dynamic link library QtCore4.dll
And it gives me this error as well:
ImportError: Failed to load C:\Program Files\Nuke9.0v8\nuke-9.0.8.dll
Now at this point I can't really do any more testing at home, and my studio is closed for the holidays. So I have two questions I'd like to ask:
1) What is this error about the "procedure entry point"? It only happens when I try to call something on a QProcess instance.
2) Why is my QProcess being destroyed before the render is finished? How come this doesn't happen with subprocess? How can I submit a Nuke render job while achieving the same results as with subprocess?
Here is my test code:
import os
import sys
import subprocess
import PyQt4
from PyQt4 import QtCore

class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args

class SequentialManager(QtCore.QObject):
    started = QtCore.pyqtSignal()
    finished = QtCore.pyqtSignal()
    progressChanged = QtCore.pyqtSignal(int)
    dataChanged = QtCore.pyqtSignal(str)
    # ^ this is how we can send a signal and can declare what type
    #   of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)
        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)

class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.pyqtSlot()
    def on_started(self):
        print 'process started'

    @QtCore.pyqtSlot()
    def on_finished(self):
        print 'finished'

    @QtCore.pyqtSlot(str)
    def on_dataChanged(self, message):
        if message:
            print message

def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    # nuke.execute(writeNode, fr)
    for x in range(20):
        print 'random'

def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, __file__)
    cmd2 = [__file__, 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished()

if __name__ == "__main__":
    print sys.argv
    if len(sys.argv) == 1:
        run()
    elif len(sys.argv) == 2:
        nukeTestRender()
I have managed to come up with an answer, so I will write the details below:
Basically, I was getting the error because the installed PyQt4 was not compatible with my version of Nuke, so it is apparently recommended to use the PySide that is included with Nuke instead. However, Nuke's Python executable cannot natively find PySide, so a few paths needed to be added to sys.path:
paths = ['C:\\Program Files\\Nuke9.0v8\\lib\\site-packages',
         'C:\\Users\\Desktop02\\.nuke',
         'C:\\Program Files\\Nuke9.0v8\\plugins',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\setuptools-0.6c11-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\protobuf-2.5.0-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages',
         'C:\\Program Files\\Nuke9.0v8\\plugins\\modules',
         'C:\\Program Files\\Nuke9.0v8\\configs\\Python\\site-packages',
         'C:\\Users\\Desktop02\\.nuke\\Python\\site-packages']

for path in paths:
    sys.path.append(path)
I found the missing paths by opening up both Nuke in GUI mode and the Python executable, and comparing both sys.path to see what the Python executable was lacking.
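If it helps anyone repeating this, the comparison can be as simple as printing sys.path from each interpreter and diffing the two outputs (a trivial sketch, not from the original post):

# Run this in both Nuke's GUI Python and Nuke's python.exe,
# then diff the outputs to spot the entries the executable lacks.
import sys
for p in sorted(sys.path):
    print p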
And to answer my own main question: calling waitForFinished(-1) on the QProcess instance disables the function's default 30-second timeout. The answer came from this thread:
QProcess and shell : Destroyed while process is still running
So here is my resulting working code:
import os
import sys
import subprocess

sysArgs = sys.argv
try:
    import nuke
    from PySide import QtCore
except ImportError:
    raise ImportError('nuke not currently importable')

class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args

class SequentialManager(QtCore.QObject):
    started = QtCore.Signal()
    finished = QtCore.Signal()
    progressChanged = QtCore.Signal(int)
    dataChanged = QtCore.Signal(str)
    # ^ this is how we can send a signal and can declare what type
    #   of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)
        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)

class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.Slot()
    def on_started(self):
        print 'process started'

    @QtCore.Slot()
    def on_finished(self):
        print 'finished'

    @QtCore.Slot(str)
    def on_dataChanged(self, message):
        if message:
            print message

def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    nuke.execute(writeNode, fr)
    # nuke.execute(writeNode, start=1, end=285)
    for x in range(20):
        print 'random'

def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, sysArgs[0])
    cmd2 = [sysArgs[0], 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished(-1)

if __name__ == "__main__":
    print sys.argv
    if len(sysArgs) == 1:
        run()
    elif len(sysArgs) == 2:
        nukeTestRender()
For whatever reason, PySide refuses to load for me unless the nuke module is imported first. Also, there's a known issue where importing nuke wipes out all sys.argv arguments, so those have to be stored somewhere before the nuke import...
PyCharm runs my code correctly, printing the subprocess's stdout line by line to a Qt widget (QTextBrowser). But after packaging to an .exe with PyInstaller, it prints all the stdout at once when the subprocess finishes, which is not the expected result.
I tried using flush() and stdout.close() in the subprocess; still the same.
# (imports inferred from usage below)
from Queue import Queue, Empty
from threading import Thread
from subprocess import Popen, PIPE
import os
from PyQt4 import QtGui, uic

class NonBlockingStreamReader:
    def __init__(self, stream):
        self._s = stream
        self._q = Queue()

        def _populateQueue(stream, queue):
            while True:
                line = stream.readline()
                if line:
                    queue.put(line)
                # else:
                #     raise UnexpectedEndOfStream

        self._t = Thread(target=_populateQueue, args=(self._s, self._q))
        self._t.daemon = True
        self._t.start()  # start collecting lines from the stream

    def readline(self, timeout=None):
        try:
            return self._q.get(block=timeout is not None, timeout=timeout)
        except Empty:
            return None

......

form = uic.loadUiType("data/GUI/GUI.ui")[0]

class Form(QtGui.QDialog, form):
    def __init__(self, parent=None):
        QtGui.QDialog.__init__(self, parent)
        self.setupUi(self)
        os.chdir("../../")
        self.LogAnalyzeButton.clicked.connect(self.LogAnalyzePre)

    ......

    def LogAnalyzePre(self):
        self.Console.append("Analyzing log, please wait . . . . . . ." + "\n")
        arguments = 'python log.py %s' % (path)  # 'path' is defined in code elided above
        self.proc = Popen(arguments, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
        nbsr = NonBlockingStreamReader(self.proc.stdout)
        while self.proc.poll() is None:
            line = nbsr.readline(0.1)
            print line
            if line:
                self.Console.insertPlainText(unicode(line, "utf-8"))
                self.Console.moveCursor(QtGui.QTextCursor.End)
            QtGui.QApplication.processEvents()
When running the .exe, I can see in the debug cmd window that line's value is always None, and once the subprocess closes, the stdout lines in the queue are printed all at once.
This turned out to be my own mistake: I didn't put the log.py to which I had added the flush() calls into the same folder as the .exe. The flush() calls definitely solve this stdout output issue.
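For reference, a minimal sketch of the kind of flushing the child script (log.py here) needs; the loop body is hypothetical:

import sys
import time

for i in range(10):
    print 'progress %d' % i
    sys.stdout.flush()  # without this, the pipe buffers output until the process exits
    time.sleep(0.5)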
I am trying to make use of Windows hooks in order to intercept and block keystrokes while my application sends its own GUI events.
I came up with the following listing:
import pythoncom
import pyHook
import threading
import time

def on_keyboard_event(event):
    print 'MessageName:', event.MessageName
    print 'Message:', event.Message
    print 'Time:', event.Time
    print 'Window:', event.Window
    print 'WindowName:', event.WindowName
    print 'Ascii:', event.Ascii, chr(event.Ascii)
    print 'Key:', event.Key
    print 'KeyID:', event.KeyID
    print 'ScanCode:', event.ScanCode
    print 'Extended:', event.Extended
    print 'Injected:', event.Injected
    print 'Alt', event.Alt
    print 'Transition', event.Transition
    print '---'
    return False

class WindowsHooksWrapper:
    def __init__(self):
        self.started = False
        self.thread = threading.Thread(target=self.thread_proc)
        self.hook_manager = pyHook.HookManager()

    def start(self):
        if self.started:
            self.stop()
        # Register hook
        self.hook_manager.KeyDown = on_keyboard_event
        self.hook_manager.KeyUp = on_keyboard_event
        self.hook_manager.HookKeyboard()
        # Start the Windows message pump
        self.started = True
        self.thread.start()

    def stop(self):
        if not self.started:
            return
        self.started = False
        self.thread.join()
        self.hook_manager.UnhookKeyboard()

    def thread_proc(self):
        print "Thread started"
        while self.started:
            pythoncom.PumpWaitingMessages()
        print "Thread exiting..."

class WindowsHooksWrapper2:
    def __init__(self):
        self.started = False
        self.thread = threading.Thread(target=self.thread_proc)

    def start(self):
        if self.started:
            self.stop()
        self.started = True
        self.thread.start()

    def stop(self):
        if not self.started:
            return
        self.started = False
        self.thread.join()

    def thread_proc(self):
        print "Thread started"
        # Evidently, the hook must be registered on the same thread as the
        # Windows message pump or it will not work, with no indication of error.
        # Also note that for exception safety, when the hook manager goes out
        # of scope, it unregisters all outstanding hooks.
        hook_manager = pyHook.HookManager()
        hook_manager.KeyDown = on_keyboard_event
        hook_manager.KeyUp = on_keyboard_event
        hook_manager.HookKeyboard()
        while self.started:
            pythoncom.PumpWaitingMessages()
        print "Thread exiting..."
        hook_manager.UnhookKeyboard()  # was self.hook_manager, which doesn't exist on this class

def main():
    # hook_manager = pyHook.HookManager()
    # hook_manager.KeyDown = on_keyboard_event
    # hook_manager.KeyUp = on_keyboard_event
    # hook_manager.HookKeyboard()
    # pythoncom.PumpMessages()
    hook_wrapper = WindowsHooksWrapper2()
    hook_wrapper.start()
    time.sleep(30)
    hook_wrapper.stop()

if __name__ == "__main__":
    main()
The commented-out section in main() is from the pyHook wiki tutorial, and it works fine.
I then tried to integrate that into a class, which is the WindowsHooksWrapper class. If I use that class, it does not work and keyboard messages go through to their intended target.
On a hunch, I then tried WindowsHooksWrapper2, where I moved the registration of the hook to the same thread as the message pump. It now works.
Is my hunch correct that the registration has to happen on the same thread as the pump? If so, why?
Note that I have a feeling this is a requirement of the Win32 API rather than of Python or the pyHook library itself, because I did the same thing in C++ using SetWindowsHook directly and got the same result.
You've created a thread-scope hook.
These hook events are associated either with a specific thread or with all threads in the same desktop as the calling thread.
pythoncom.PumpWaitingMessages() in Python and GetMessage/PeekMessage in Win32 are the calls that retrieve messages for that "specific thread or all threads in the same desktop as the calling thread", which is why the hook must be installed on the thread that runs the message pump.
To create a global hook, so that your keyboard hook is accessible from all processes, it has to be placed in a DLL, which will then be loaded into each process's address space. See this answer for details of how to make a global keyboard hook.
My requirement is to shut the server down from the main thread after more than 2 minutes without a request. I called serve_forever(), but I don't know how to stop it from the main thread. My code:
import SocketServer, socket
import threading, time, re

last_request_time = 0

class ThreadSocketServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        global last_request_time
        last_request_time = time.time()
        # print last_request_time
        intact_data = []
        while True:
            data = str(self.request.recv(1024))
            intact_data.append(data)
            # if response[-16:-1].find('clientcomplete'):
            if re.search('clientcomplete', data):
                print 'server recv complete'
                self.request.send('Finished')
                break
        self.request.close()

if __name__ == '__main__':
    server = ThreadSocketServer(('192.168.3.33', 12345), RequestHandler)
    # server.handle_request()
    server.serve_forever()
I tried server.handle_request(), but it doesn't seem to support multithreading.
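One possible approach (a sketch, not from the original thread; it assumes the handler above keeps updating last_request_time): run serve_forever() on a worker thread and let the main thread call shutdown() once the server has been idle for more than 2 minutes.

if __name__ == '__main__':
    server = ThreadSocketServer(('192.168.3.33', 12345), RequestHandler)
    last_request_time = time.time()
    t = threading.Thread(target=server.serve_forever)
    t.start()
    while time.time() - last_request_time < 120:  # 2 minutes with no request
        time.sleep(1)
    server.shutdown()      # makes serve_forever() return
    server.server_close()
    t.join()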
I'm trying to create a child process that can take input through raw_input() or input(), but I'm getting an end-of-file error, EOFError: EOF when asking for input.
I'm doing this to experiment with multiprocessing in Python, and I remember this easily working in C. Is there a workaround without using pipes or queues from the main process to its child? I'd really like the child to deal with user input.
from multiprocessing import Process

def child():
    print 'test'
    message = raw_input()  # this is where this process fails
    print message

def main():
    p = Process(target=child)
    p.start()
    p.join()

if __name__ == '__main__':
    main()
I wrote some test code that hopefully shows what I'm trying to achieve.
My answer is taken from here: Is there any way to pass 'stdin' as an argument to another process in python?
I have modified your example and it seems to work:
from multiprocessing.process import Process
import sys
import os

def child(newstdin):
    sys.stdin = newstdin
    print 'test'
    message = raw_input()  # this is where this process doesn't fail anymore
    print message

def main():
    newstdin = os.fdopen(os.dup(sys.stdin.fileno()))
    p = Process(target=child, args=(newstdin,))
    p.start()
    p.join()

if __name__ == '__main__':
    main()
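If I understand it correctly, this works because multiprocessing closes the child's standard input during bootstrap (so raw_input() immediately hits EOF); duplicating the parent's stdin file descriptor with os.dup() before starting the child and rebinding sys.stdin inside it restores interactive input.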