How to stream output in realtime in Twisted[autobahn] websocket server? - python-2.7
I want to execute a C program using subprocess.Popen() and stream its output in real time to the client. However, the output is buffered and is sent all together at the end of execution (blocking behavior). How can I receive the output in real time and send it immediately in Twisted Autobahn?
    def onConnect(self, request):
        try:
            self.cont_name = ''.join(random.choice(string.lowercase) for i in range(5))
            self.file_name = self.cont_name
            print("Connecting...")
        except Exception:
            print("Failed" + str(Exception))

    def onOpen(self):
        try:
            print("open")
        except Exception:
            print("Couldn't create container")

    def onMessage(self, payload, isBinary=False):
        cmd = "docker exec " + self.cont_name + " /tmp/./" + self.file_name
        a = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE, bufsize=1)
        for line in iter(a.stdout.readline, b''):
            line = line.encode('utf8')
            self.sendMessage(line)

    def onClose(self, wasClean, code, reason):
        try:
            print("Closed container...")
        except Exception:
            print(str(Exception))
When the docker command is executed using subprocess, the entire output of the C code is returned at once rather than as it is produced. For example:
    #include <stdio.h>
    #include <unistd.h>

    int main() {
        int i = 0;
        for (i = 0; i < 5; i++) {
            fflush(stdout);
            printf("Rounded\n");
            sleep(3);
        }
    }
After running this in the container, the program should send 'Rounded' to the client every 3 seconds. However, it ends up sending all of the 'Rounded' lines together at the end of execution.
The misbehavior comes from the loop in this method:
    def onMessage(self, payload, isBinary=False):
        cmd = "docker exec " + self.cont_name + " /tmp/./" + self.file_name
        a = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE, bufsize=1)
        for line in iter(a.stdout.readline, b''):
            line = line.encode('utf8')
            self.sendMessage(line)
Twisted is a cooperative multitasking system. By default, everything runs in a single thread ("the reactor thread"). That means all code has to periodically (and usually quickly) give up control so other code (application code or Twisted-implementation code) gets a chance to run. The loop in this function reads from the child process and tries to send the data using an Autobahn API - over and over again, never giving up control.
Blocking reads from the Popen object may also cause problems. You won't know how long the read will block and so you won't know how long you'll prevent other code from running in the reactor thread. You could either move your Popen reads to a new thread where they won't block the reactor thread:
    def onMessage(self, payload, isBinary=False):
        cmd = "docker exec " + self.cont_name + " /tmp/./" + self.file_name
        popen_in_thread(
            lambda line: reactor.callFromThread(
                lambda: self.sendMessage(line.encode("utf-8"))
            ),
            [cmd], shell=True, stdout=subprocess.PIPE, bufsize=1,
        )

    def popen_in_thread(callback, *args, **kwargs):
        def threaded():
            a = subprocess.Popen(*args, **kwargs)
            for line in iter(a.stdout.readline, b''):
                callback(line)
        reactor.callInThread(threaded)
Or, better, use Twisted's own process support:
    def onMessage(self, payload, isBinary=False):
        connection = self  # the WebSocket protocol, captured for use below

        class ProcessLinesToMessages(ProcessProtocol):
            buf = b""

            def outReceived(self, output):
                buf = self.buf + output
                lines = buf.splitlines()
                # keep any trailing partial line in the buffer
                if buf.endswith(b"\n"):
                    self.buf = b""
                else:
                    self.buf = lines.pop()
                for line in lines:
                    connection.sendMessage(line.encode("utf-8"))

        reactor.spawnProcess(
            ProcessLinesToMessages(),
            "docker",
            [
                "docker",
                "exec",
                self.cont_name,
                "/tmp/./" + self.file_name,
            ],
        )
(neither version tested, hopefully the idea is clear though)
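For reference, here is a minimal, self-contained sketch of the ProcessProtocol approach (my own illustration, not part of the answer above): it spawns a command, buffers partial lines, and prints each complete line as soon as it arrives. Swapping print for sendMessage gives the WebSocket version; "ping -c 3 localhost" merely stands in for the docker command.

    from twisted.internet import reactor
    from twisted.internet.protocol import ProcessProtocol

    class PrintLines(ProcessProtocol):
        buf = b""

        def outReceived(self, output):
            # split off complete lines; keep any trailing partial line buffered
            self.buf += output
            while b"\n" in self.buf:
                line, self.buf = self.buf.split(b"\n", 1)
                print("line received: " + line)

        def processEnded(self, reason):
            reactor.stop()

    # env=None inherits os.environ so the executable is found on PATH
    reactor.spawnProcess(PrintLines(), "ping", ["ping", "-c", "3", "localhost"], env=None)
    reactor.run()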
Related
Check health of Docker using Python Script - Python 2.7 [duplicate]
Here's the Python code to run an arbitrary command, returning its stdout data or raising an exception on non-zero exit codes:

    proc = subprocess.Popen(
        cmd,
        stderr=subprocess.STDOUT,  # Merge stdout and stderr
        stdout=subprocess.PIPE,
        shell=True)

communicate is used to wait for the process to exit:

    stdoutdata, stderrdata = proc.communicate()

The subprocess module does not support timeouts (the ability to kill a process running for more than X number of seconds), therefore communicate may take forever to run. What is the simplest way to implement timeouts in a Python program meant to run on Windows and Linux?
In Python 3.3+:

    from subprocess import STDOUT, check_output

    output = check_output(cmd, stderr=STDOUT, timeout=seconds)

output is a byte string that contains the command's merged stdout and stderr data. Unlike the proc.communicate() method, check_output raises CalledProcessError on non-zero exit status, as the question's text asks for. I've removed shell=True because it is often used unnecessarily; you can always add it back if cmd indeed requires it. If you add shell=True, i.e., if the child process spawns its own descendants, check_output() can return much later than the timeout indicates; see Subprocess timeout failure. The timeout feature is available on Python 2.x via the subprocess32 backport of the 3.2+ subprocess module.
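For completeness, a small usage sketch (my addition, not part of the answer) showing the timeout being handled:

    from subprocess import STDOUT, CalledProcessError, TimeoutExpired, check_output

    try:
        output = check_output(["sleep", "10"], stderr=STDOUT, timeout=3)
    except TimeoutExpired:
        # on timeout the child is killed before TimeoutExpired propagates
        print("timed out")
    except CalledProcessError as e:
        print("failed with status", e.returncode)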
I don't know much about the low level details, but given that in Python 2.6 the API offers the ability to wait for threads and terminate processes, what about running the process in a separate thread?

    import subprocess, threading

    class Command(object):
        def __init__(self, cmd):
            self.cmd = cmd
            self.process = None

        def run(self, timeout):
            def target():
                print 'Thread started'
                self.process = subprocess.Popen(self.cmd, shell=True)
                self.process.communicate()
                print 'Thread finished'

            thread = threading.Thread(target=target)
            thread.start()

            thread.join(timeout)
            if thread.is_alive():
                print 'Terminating process'
                self.process.terminate()
                thread.join()
            print self.process.returncode

    command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
    command.run(timeout=3)
    command.run(timeout=1)

The output of this snippet on my machine is:

    Thread started
    Process started
    Process finished
    Thread finished
    0
    Thread started
    Process started
    Terminating process
    Thread finished
    -15

where it can be seen that, in the first execution, the process finished correctly (return code 0), while in the second one the process was terminated (return code -15). I haven't tested on Windows, but aside from updating the example command, I think it should work, since I haven't found anything in the documentation that says thread.join or process.terminate is not supported.
jcollado's answer can be simplified using the threading.Timer class:

    import shlex
    from subprocess import Popen, PIPE
    from threading import Timer

    def run(cmd, timeout_sec):
        proc = Popen(shlex.split(cmd), stdout=PIPE, stderr=PIPE)
        timer = Timer(timeout_sec, proc.kill)
        try:
            timer.start()
            stdout, stderr = proc.communicate()
        finally:
            timer.cancel()

    # Examples: both take 1 second
    run("sleep 1", 5)  # process ends normally at 1 second
    run("sleep 5", 1)  # timeout happens at 1 second
If you're on Unix:

    import signal

    class Alarm(Exception):
        pass

    def alarm_handler(signum, frame):
        raise Alarm

    signal.signal(signal.SIGALRM, alarm_handler)
    signal.alarm(5 * 60)  # 5 minutes
    try:
        stdoutdata, stderrdata = proc.communicate()
        signal.alarm(0)  # reset the alarm
    except Alarm:
        print "Oops, taking too long!"
        # whatever else
Here is Alex Martelli's solution as a module, with proper process killing. The other approaches do not work because they do not use proc.communicate(). So if you have a process that produces lots of output, it will fill its output buffer and then block until you read something from it.

    from os import kill
    from signal import alarm, signal, SIGALRM, SIGKILL
    from subprocess import PIPE, Popen

    def run(args, cwd=None, shell=False, kill_tree=True, timeout=-1, env=None):
        '''
        Run a command with a timeout after which it will be forcibly killed.
        '''
        class Alarm(Exception):
            pass

        def alarm_handler(signum, frame):
            raise Alarm

        p = Popen(args, shell=shell, cwd=cwd, stdout=PIPE, stderr=PIPE, env=env)
        if timeout != -1:
            signal(SIGALRM, alarm_handler)
            alarm(timeout)
        try:
            stdout, stderr = p.communicate()
            if timeout != -1:
                alarm(0)
        except Alarm:
            pids = [p.pid]
            if kill_tree:
                pids.extend(get_process_children(p.pid))
            for pid in pids:
                # process might have died before getting to this line
                # so wrap to avoid OSError: no such process
                try:
                    kill(pid, SIGKILL)
                except OSError:
                    pass
            return -9, '', ''
        return p.returncode, stdout, stderr

    def get_process_children(pid):
        p = Popen('ps --no-headers -o pid --ppid %d' % pid, shell=True,
                  stdout=PIPE, stderr=PIPE)
        stdout, stderr = p.communicate()
        return [int(p) for p in stdout.split()]

    if __name__ == '__main__':
        print run('find /', shell=True, timeout=3)
        print run('find', shell=True)
Since Python 3.5, there's a new subprocess.run universal command (meant to replace check_call, check_output, ...) which has the timeout= parameter as well:

    subprocess.run(args, *, stdin=None, input=None, stdout=None, stderr=None,
                   shell=False, cwd=None, timeout=None, check=False,
                   encoding=None, errors=None)

It runs the command described by args, waits for the command to complete, and then returns a CompletedProcess instance. It raises a subprocess.TimeoutExpired exception when the timeout expires.
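A short usage sketch (my addition): run with a timeout and handle the expiry.

    import subprocess

    try:
        result = subprocess.run(["sleep", "10"], timeout=3, check=True)
    except subprocess.TimeoutExpired:
        # run() kills the child and waits for it before raising
        print("command timed out")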
timeout is now supported by call() and communicate() in the subprocess module (as of Python 3.3):

    import subprocess

    subprocess.call("command", timeout=20, shell=True)

This will call the command and raise the exception subprocess.TimeoutExpired if the command doesn't finish within 20 seconds. You can then handle the exception to continue your code, something like:

    try:
        subprocess.call("command", timeout=20, shell=True)
    except subprocess.TimeoutExpired:
        # insert code here

Hope this helps.
Surprised nobody has mentioned using timeout:

    timeout 5 ping -c 3 somehost

This obviously won't work for every use case, but if you're dealing with a simple script, it is hard to beat. Also available as gtimeout in coreutils via homebrew for Mac users.
I've modified sussudio's answer. Now the function returns (returncode, stdout, stderr, timeout); stdout and stderr are decoded to UTF-8 strings:

    import shlex
    import subprocess
    from threading import Timer

    def kill_proc(proc, timeout):
        timeout["value"] = True
        proc.kill()

    def run(cmd, timeout_sec):
        proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        timeout = {"value": False}
        timer = Timer(timeout_sec, kill_proc, [proc, timeout])
        timer.start()
        stdout, stderr = proc.communicate()
        timer.cancel()
        return (proc.returncode, stdout.decode("utf-8"),
                stderr.decode("utf-8"), timeout["value"])
Another option is to write to a temporary file to prevent stdout from blocking, instead of needing to poll with communicate(). This worked for me where the other answers did not, for example on Windows.

    outFile = tempfile.SpooledTemporaryFile()
    errFile = tempfile.SpooledTemporaryFile()
    proc = subprocess.Popen(args, stderr=errFile, stdout=outFile,
                            universal_newlines=False)
    wait_remaining_sec = timeout

    while proc.poll() is None and wait_remaining_sec > 0:
        time.sleep(1)
        wait_remaining_sec -= 1

    if wait_remaining_sec <= 0:
        killProc(proc.pid)
        raise ProcessIncompleteError(proc, timeout)

    # read temp streams from start
    outFile.seek(0)
    errFile.seek(0)
    out = outFile.read()
    err = errFile.read()
    outFile.close()
    errFile.close()
Prepending the Linux command timeout isn't a bad workaround, and it worked for me:

    cmd = "timeout 20 " + cmd
    p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (output, err) = p.communicate()
I added the solution with threading from jcollado to my Python module easyprocess.

Install:

    pip install easyprocess

Example:

    from easyprocess import Proc

    # shell is not supported!
    stdout = Proc('ping localhost').call(timeout=1.5).stdout
    print stdout
Here is my solution; I was using Thread and Event:

    import subprocess
    from threading import Thread, Event

    def kill_on_timeout(done, timeout, proc):
        if not done.wait(timeout):
            proc.kill()

    def exec_command(command, timeout):
        done = Event()
        proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        watcher = Thread(target=kill_on_timeout, args=(done, timeout, proc))
        watcher.daemon = True
        watcher.start()

        data, stderr = proc.communicate()
        done.set()

        return data, stderr, proc.returncode

In action:

    In [2]: exec_command(['sleep', '10'], 5)
    Out[2]: ('', '', -9)

    In [3]: exec_command(['sleep', '10'], 11)
    Out[3]: ('', '', 0)
The solution I use is to prefix the shell command with timelimit. If the command takes too long, timelimit will stop it and Popen will have a returncode set by timelimit. If it is > 128, it means timelimit killed the process. See also python subprocess with timeout and large output (>64K).
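A hedged sketch of that approach (my illustration; it assumes the timelimit utility is installed and that its -t flag sets the limit in seconds, and 'some_long_command' is a placeholder):

    import subprocess

    p = subprocess.Popen("timelimit -t 20 some_long_command", shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = p.communicate()
    if p.returncode > 128:
        # per the answer above, > 128 means timelimit killed the process
        print "process was killed by timelimit"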
If you are using Python 2, give it a try:

    import subprocess32

    try:
        output = subprocess32.check_output(command, shell=True, timeout=3)
    except subprocess32.TimeoutExpired as e:
        print e
I've implemented what I could gather from a few of these. This works in Windows, and since this is a community wiki, I figure I would share my code as well:

    class Command(threading.Thread):
        def __init__(self, cmd, outFile, errFile, timeout):
            threading.Thread.__init__(self)
            self.cmd = cmd
            self.process = None
            self.outFile = outFile
            self.errFile = errFile
            self.timed_out = False
            self.timeout = timeout

        def run(self):
            self.process = subprocess.Popen(self.cmd, stdout=self.outFile,
                                            stderr=self.errFile)

            while (self.process.poll() is None and self.timeout > 0):
                time.sleep(1)
                self.timeout -= 1

            if not self.timeout > 0:
                self.process.terminate()
                self.timed_out = True
            else:
                self.timed_out = False

Then from another class or file:

    outFile = tempfile.SpooledTemporaryFile()
    errFile = tempfile.SpooledTemporaryFile()
    executor = command.Command(c, outFile, errFile, timeout)
    executor.daemon = True
    executor.start()
    executor.join()
    if executor.timed_out:
        out = 'timed out'
    else:
        outFile.seek(0)
        errFile.seek(0)
        out = outFile.read()
        err = errFile.read()
    outFile.close()
    errFile.close()
Once you understand the full process-running machinery on *nix, you will easily find a simpler solution: consider this simple example of how to make a timeoutable communicate() method using select.select() (available almost everywhere on *nix nowadays). This can also be written with epoll/poll/kqueue, but the select.select() variant could be a good example for you, and the major limitations of select.select() (speed and the 1024 max fds) are not applicable to your task.

This works under *nix, does not create threads, does not use signals, can be launched from any thread (not only the main one), and is fast enough to read 250 MB/s of data from stdout on my machine (i5 2.3 GHz).

There is a problem in joining stdout/stderr at the end of communicate. If you have huge program output this could lead to big memory usage. But you can call communicate() several times with smaller timeouts.

    class Popen(subprocess.Popen):
        def communicate(self, input=None, timeout=None):
            if timeout is None:
                return subprocess.Popen.communicate(self, input)

            if self.stdin:
                # Flush stdio buffer, this might block if user
                # has been writing to .stdin in an uncontrolled
                # fashion.
                self.stdin.flush()
                if not input:
                    self.stdin.close()

            read_set, write_set = [], []
            stdout = stderr = None

            if self.stdin and input:
                write_set.append(self.stdin)
            if self.stdout:
                read_set.append(self.stdout)
                stdout = []
            if self.stderr:
                read_set.append(self.stderr)
                stderr = []

            input_offset = 0
            deadline = time.time() + timeout

            while read_set or write_set:
                try:
                    rlist, wlist, xlist = select.select(read_set, write_set, [],
                                                        max(0, deadline - time.time()))
                except select.error as ex:
                    if ex.args[0] == errno.EINTR:
                        continue
                    raise

                if not (rlist or wlist):
                    # Just break if timeout
                    # Since we do not close stdout/stderr/stdin, we can call
                    # communicate() several times reading data by smaller pieces.
                    break

                if self.stdin in wlist:
                    chunk = input[input_offset:input_offset + subprocess._PIPE_BUF]
                    try:
                        bytes_written = os.write(self.stdin.fileno(), chunk)
                    except OSError as ex:
                        if ex.errno == errno.EPIPE:
                            self.stdin.close()
                            write_set.remove(self.stdin)
                        else:
                            raise
                    else:
                        input_offset += bytes_written
                        if input_offset >= len(input):
                            self.stdin.close()
                            write_set.remove(self.stdin)

                # Read stdout / stderr by 1024 bytes
                for fn, tgt in (
                    (self.stdout, stdout),
                    (self.stderr, stderr),
                ):
                    if fn in rlist:
                        data = os.read(fn.fileno(), 1024)
                        if data == '':
                            fn.close()
                            read_set.remove(fn)
                        tgt.append(data)

            if stdout is not None:
                stdout = ''.join(stdout)
            if stderr is not None:
                stderr = ''.join(stderr)

            return (stdout, stderr)
You can do this using select:

    import subprocess
    from datetime import datetime
    from select import select

    def call_with_timeout(cmd, timeout):
        started = datetime.now()
        sp = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        while True:
            p = select([sp.stdout], [], [], timeout)
            if p[0]:
                p[0][0].read()
            ret = sp.poll()
            if ret is not None:
                return ret
            if (datetime.now() - started).total_seconds() > timeout:
                sp.kill()
                return None
Python 2.7:

    import time
    import subprocess

    def run_command(cmd, timeout=0):
        start_time = time.time()
        df = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        while timeout and df.poll() is None:
            if time.time() - start_time >= timeout:
                df.kill()
                return -1, ""
        output = '\n'.join(df.communicate()).strip()
        return df.returncode, output
Example of captured output after a timeout, tested in Python 3.7.8:

    try:
        return subprocess.run(command, shell=True, capture_output=True,
                              timeout=20, cwd=cwd, universal_newlines=True)
    except subprocess.TimeoutExpired as e:
        print(e.output.decode(encoding="utf-8", errors="ignore"))
        assert False

The exception subprocess.TimeoutExpired has the output and other members:

    cmd - Command that was used to spawn the child process.
    timeout - Timeout in seconds.
    output - Output of the child process if it was captured by run() or check_output(). Otherwise, None.
    stdout - Alias for output, for symmetry with stderr.
    stderr - Stderr output of the child process if it was captured by run(). Otherwise, None.

More info: https://docs.python.org/3/library/subprocess.html#subprocess.TimeoutExpired
I've used killableprocess successfully on Windows, Linux and Mac. If you are using Cygwin Python, you'll need OSAF's version of killableprocess because otherwise native Windows processes won't get killed.
Although I haven't looked at it extensively, this decorator I found at ActiveState seems to be quite useful for this sort of thing. Along with subprocess.Popen(..., close_fds=True), at least I'm ready for shell-scripting in Python.
This solution kills the process tree in case of shell=True, passes parameters to the process (or not), has a timeout, and gets back the stdout, stderr and return code of the call (it uses psutil for kill_proc_tree). This was based on several solutions posted on SO, including jcollado's. Posted in response to comments by Anson and jradice in jcollado's answer. Tested on Windows Server 2012 and Ubuntu 14.04. Please note that for Ubuntu you need to change the parent.children(...) call to parent.get_children(...).

    import os
    import subprocess
    from threading import Thread

    import psutil

    def kill_proc_tree(pid, including_parent=True):
        parent = psutil.Process(pid)
        children = parent.children(recursive=True)
        for child in children:
            child.kill()
        psutil.wait_procs(children, timeout=5)
        if including_parent:
            parent.kill()
            parent.wait(5)

    def run_with_timeout(cmd, current_dir, cmd_parms, timeout):
        def target():
            process = subprocess.Popen(cmd, cwd=current_dir, shell=True,
                                       stdout=subprocess.PIPE,
                                       stdin=subprocess.PIPE,
                                       stderr=subprocess.PIPE)

            # wait for the process to terminate
            if cmd_parms == "":
                out, err = process.communicate()
            else:
                out, err = process.communicate(cmd_parms)
            errcode = process.returncode

        thread = Thread(target=target)
        thread.start()

        thread.join(timeout)
        if thread.is_alive():
            me = os.getpid()
            kill_proc_tree(me, including_parent=False)
            thread.join()
There's an idea to subclass the Popen class and extend it with some simple method decorators. Let's call it ExpirablePopen.

    from logging import error
    from subprocess import Popen
    from threading import Event
    from threading import Thread

    class ExpirablePopen(Popen):

        def __init__(self, *args, **kwargs):
            self.timeout = kwargs.pop('timeout', 0)
            self.timer = None
            self.done = Event()

            Popen.__init__(self, *args, **kwargs)

        def __tkill(self):
            timeout = self.timeout
            if not self.done.wait(timeout):
                error('Terminating process {} by timeout of {} secs.'.format(self.pid, timeout))
                self.kill()

        def expirable(func):
            def wrapper(self, *args, **kwargs):
                # zero timeout means call of parent method
                if self.timeout == 0:
                    return func(self, *args, **kwargs)

                # if timer is None, need to start it
                if self.timer is None:
                    self.timer = thr = Thread(target=self.__tkill)
                    thr.daemon = True
                    thr.start()

                result = func(self, *args, **kwargs)
                self.done.set()

                return result
            return wrapper

        wait = expirable(Popen.wait)
        communicate = expirable(Popen.communicate)

    if __name__ == '__main__':
        from subprocess import PIPE

        print ExpirablePopen('ssh -T git@bitbucket.org', stdout=PIPE, timeout=1).communicate()
I had the problem that I wanted to terminate a multithreading subprocess if it took longer than a given timeout length. I wanted to set a timeout in Popen(), but it did not work. Then I realized that Popen().wait() is equal to call(), and so I had the idea to set a timeout within the .wait(timeout=xxx) method, which finally worked. Thus, I solved it this way:

    import os
    import sys
    import signal
    import subprocess
    from multiprocessing import Pool

    cores_for_parallelization = 4
    timeout_time = 15  # seconds

    def main():
        jobs = [...YOUR_JOB_LIST...]
        with Pool(cores_for_parallelization) as p:
            p.map(run_parallel_jobs, jobs)

    def run_parallel_jobs(args):
        # Define the arguments including the paths
        initial_terminal_command = 'C:\\Python34\\python.exe'  # Python executable
        function_to_start = 'C:\\temp\\xyz.py'  # The multithreading script
        final_list = [initial_terminal_command, function_to_start]
        final_list.extend(args)

        # Start the subprocess and determine the process PID
        subp = subprocess.Popen(final_list)  # starts the process
        pid = subp.pid

        # Wait until the return code returns from the function by considering the timeout.
        # If not, terminate the process.
        try:
            returncode = subp.wait(timeout=timeout_time)  # should be zero if accomplished
        except subprocess.TimeoutExpired:
            # Distinguish between Linux and Windows and terminate the process if
            # the timeout has been expired
            if sys.platform == 'linux2':
                os.kill(pid, signal.SIGTERM)
            elif sys.platform == 'win32':
                subp.terminate()

    if __name__ == '__main__':
        main()
Late answer, for Linux only, but in case someone wants to use subprocess.getstatusoutput(), where the timeout argument isn't available, you can use the built-in Linux timeout at the beginning of the command, i.e.:

    import subprocess

    timeout = 25  # seconds
    cmd = f"timeout --preserve-status --foreground {timeout} ping duckgo.com"

    exit_c, out = subprocess.getstatusoutput(cmd)

    if exit_c == 0:
        print("success")
    else:
        print("Error: ", out)

timeout arguments:

    --preserve-status : preserve the exit status
    --foreground : run in the foreground
    25 : timeout value in seconds
Unfortunately, I'm bound by very strict policies on the disclosure of source code by my employer, so I can't provide actual code. But for my taste the best solution is to create a subclass overriding Popen.wait() to poll instead of waiting indefinitely, and Popen.__init__ to accept a timeout parameter. Once you do that, all the other Popen methods (which call wait) will work as expected, including communicate.
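Since the answer could not include code, here is a rough sketch of what such a subclass might look like (my own reconstruction of the described approach; the names and the 0.1 s poll interval are invented):

    import subprocess
    import time

    class TimeoutPopen(subprocess.Popen):
        """Popen whose wait() polls against a deadline instead of blocking."""

        def __init__(self, *args, **kwargs):
            self.timeout = kwargs.pop("timeout", None)  # seconds, or None
            subprocess.Popen.__init__(self, *args, **kwargs)

        def wait(self):
            if self.timeout is None:
                return subprocess.Popen.wait(self)
            deadline = time.time() + self.timeout
            while self.poll() is None:
                if time.time() >= deadline:
                    self.kill()  # deadline passed: kill the child
                    break
                time.sleep(0.1)
            return subprocess.Popen.wait(self)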
https://pypi.python.org/pypi/python-subprocess2 provides extensions to the subprocess module which allow you to wait up to a certain period of time, and otherwise terminate. So, to wait up to 10 seconds for the process to terminate, otherwise kill it:

    pipe = subprocess.Popen('...')

    timeout = 10

    results = pipe.waitOrTerminate(timeout)

This is compatible with both Windows and Unix. "results" is a dictionary: it contains "returnCode", which is the return code of the app (or None if it had to be killed), as well as "actionTaken", which will be "SUBPROCESS2_PROCESS_COMPLETED" if the process completed normally, or a mask of "SUBPROCESS2_PROCESS_TERMINATED" and "SUBPROCESS2_PROCESS_KILLED" depending on the action taken (see the documentation for full details).
For Python 2.6+, use gevent:

    from gevent.subprocess import Popen, PIPE, STDOUT

    def call_sys(cmd, timeout):
        p = Popen(cmd, shell=True, stdout=PIPE)
        output, _ = p.communicate(timeout=timeout)
        assert p.returncode == 0, p.returncode
        return output

    call_sys('./t.sh', 2)

where t.sh is, for example:

    sleep 5
    echo done
    exit 1
Sometimes you need to run a process (e.g. ffmpeg) without using communicate(), and in this case you need an asynchronous timeout. A practical way to do this is using ttldict:

    pip install ttldict

    import time
    import subprocess
    from threading import Thread, Event
    from ttldict import TTLOrderedDict

    sp_timeout = TTLOrderedDict(default_ttl=10)

    def kill_on_timeout(done, proc):
        while True:
            now = time.time()
            if sp_timeout.get('exp_time') is None:
                proc.kill()
                break

    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               text=True, stderr=subprocess.STDOUT)
    sp_timeout['exp_time'] = time.time()

    done = Event()
    watcher = Thread(target=kill_on_timeout, args=(done, process))
    watcher.daemon = True
    watcher.start()
    done.set()

    for line in process.stdout:
        .......
Is there any way to stop a background process when you enter pdb?
So, I have some code like this:

    class ProgressProc(multiprocessing.Process):
        def __init__(self):
            multiprocessing.Process.__init__(self)

        def run(self):
            while True:
                markProgress()
                time.sleep(10)

    progressProc = ProgressProc()
    progressProc.start()
    doSomething()
    progressProc.terminate()

The problem is, if I do pdb.set_trace() inside the doSomething() function, the ProgressProc process keeps going. It keeps printing stuff to the console while the pdb prompt is active. What I would like is to have some way for ProgressProc to check if the main thread (really, any other thread) is suspended in pdb, and then I can skip markProgress(). There's sys.gettrace(), but that only works for the thread that did the pdb.set_trace(), and I can't figure out how to call it on a different thread than the one I'm in. What else can I do? Are there any signals I can catch? I could have my main method replace pdb.set_trace to call some multiprocessing.Event first. Is there any cleaner way?

ETA: This is also for a python gdb command.
Debugging with multiple threads or processes is currently not possible with Pdb. However, I hope the following can help you. This solution suspends the main thread with Pdb and then sends a signal to the other process, where Rpdb is started.

Open a socket in run. Make sure to set the timeout to 0. When the process receives a signal, start Rpdb with rpdb.set_trace().

    import time
    import rpdb
    from multiprocessing.connection import Listener

    signal = 'break'
    address = ('localhost', 6000)

    def run(self):
        listener = Listener(address=address)
        listener._listener._socket.settimeout(0)
        recvd = False
        while True:
            markProgress()
            if not recvd:
                try:
                    conn = listener.accept()
                    msg = conn.recv()
                    if msg == signal:
                        recvd = True
                        rpdb.set_trace()
                except:
                    pass
            time.sleep(2)

Do set_trace() inside doSomething() like before, then connect to the socket and send the signal:

    import sys
    import time
    import pdb
    from multiprocessing.connection import Client

    def doSomething():
        pdb.set_trace()
        conn = Client(address)
        conn.send(signal)
        conn.close()
        for x in range(100):
            time.sleep(1)
            sys.__stdout__.write("doSomething: " + str(x) + "\n")

Now you can start your program. After the signal is sent, you should get the output "pdb is running on 127.0.0.1:4444". Now open a second terminal and connect to Rpdb with nc localhost 4444.

This only works with two processes/threads. If you want to work with more, you could try starting Rpdb with a different port for each process, like this:

    rpdb.Rpdb(port=12345)

You may have to change all your prints to sys.__stdout__.write, because Rpdb changes stdout.
python why this subprocess command doesn't work as expected
I was trying to use subprocess to extract lines from a log file. The intention here is to extract the logs while some program is executing, wait for some time, and copy all the logs to another file.

    #!/bin/python
    import threading
    import time, subprocess

    class GetLogs(threading.Thread):
        '''
        Get the developer logs
        '''

        def __init__(self):
            ''' init '''
            self.stop = False
            threading.Thread.__init__(self)
            print "Initialised thread"

        def run(self):
            ''' Collect the logs from devlog '''
            command = "tail -f /var/log/developer_log | tee /tmp/log_collected"
            response = subprocess.check_output(command.split(), shell=True)
            print "Subprocess called" + str(response)
            while (self.stop is False):
                time.sleep(1)
                print "Continuing"
                continue
            print "Finished the log file"

    gl = GetLogs()
    gl.start()

    ##Do some activity here
    print "Sleeping for 10 sec"
    time.sleep(10)

    gl.strop = True
    print "Done"

This doesn't work; the program gets stuck.
subprocess.check_output() waits for all the output. It waits for the child process to exit or close its STDOUT stream. tail -f never exits and never closes its STDOUT stream. Therefore none of the lines of code below the call to check_output() will execute. As the warning about deadlocks in https://docs.python.org/2/library/subprocess.html suggests, look at using Popen() and communicate() instead.
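To make that concrete, a hedged sketch (my addition, not part of the answer): since tail -f never ends, even a single communicate() call would block forever, so the stream has to be read incrementally, e.g. with a readline loop over a Popen pipe:

    import subprocess

    proc = subprocess.Popen(["tail", "-f", "/var/log/developer_log"],
                            stdout=subprocess.PIPE)
    # print each log line as it arrives rather than waiting for process exit
    for line in iter(proc.stdout.readline, ''):
        print "log line: " + line.rstrip()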
Trapping a shutdown event in Python
I posted a question about how to catch a "sudo shutdown -r 2" event in Python. I was sent to this thread: Run code in python script on shutdown signal. I'm running a Raspberry Pi v2 with Jessie. I have read about signal and have tried to follow the ideas in the above thread, but so far I have not been successful. Here is my code:

    import time
    import signal
    import sys

    def CloseAll(Code, Frame):
        f = open('/mnt/usbdrive/output/TestSignal.txt', 'a')
        f.write('Signal Code:' + Code)
        f.write('Signal Frame:' + Frame)
        f.write('\r\n')
        f.close()
        sys.exit(0)

    signal.signal(signal.SIGTERM, CloseAll)

    print('Program is running')
    try:
        while True:
            # get readings from sensors every 15 seconds
            time.sleep(15)
            f = open('/mnt/usbdrive/output/TestSignal.txt', 'a')
            f.write('Hello ')
            f.write('\r\n')
            f.close()
    except KeyboardInterrupt:
        f = open('/mnt/usbdrive/output/TestSignal.txt', 'a')
        f.write('Done')
        f.write('\r\n')
        f.close()

The program runs in a "screen" session/window and reacts as expected to a CTRL-C. However, when I exit the screen session, leaving the program running, and enter "sudo shutdown -r 2", the Pi reboots as expected after 2 minutes, but the TestSignal.txt file does not show that the signal.SIGTERM event was processed. What am I doing wrong? Or better yet, how can I trap the shutdown event, usually initiated by a cron job, and close my Python program running in a screen session gracefully?
When you do not try to await such an event, but in a parallel session send SIGTERM to that process (e.g. by calling kill -15 $PID on the process id $PID of the running python script), you should see an instructive error message ;-) Also the comment about the mount point should be of interest after you repair the Python errors (TypeError: cannot concatenate 'str' and 'int' objects).

Try something like:

    import time
    import signal
    import sys

    LOG_PATH = '/mnt/usbdrive/output/TestSignal.txt'

    def CloseAll(Code, Frame):
        f = open(LOG_PATH, 'a')
        f.write('Signal Code:' + str(Code) + ' ')
        f.write('Signal Frame:' + str(Frame))
        f.write('\r\n')
        f.close()
        sys.exit(0)

    signal.signal(signal.SIGTERM, CloseAll)

    print('Program is running')
    try:
        while True:
            # get readings from sensors every 15 seconds
            time.sleep(15)
            f = open(LOG_PATH, 'a')
            f.write('Hello ')
            f.write('\r\n')
            f.close()
    except KeyboardInterrupt:
        f = open(LOG_PATH, 'a')
        f.write('Done')
        f.write('\r\n')
        f.close()

as a starting point. If this works somehow on your system, why not rewrite some portions like:

    # ... 8< - - -
    def close_all(signum, frame):
        with open(LOG_PATH, 'a') as f:
            f.write('Signal Code:%d Signal Frame:%s\r\n' % (signum, frame))
        sys.exit(0)

    signal.signal(signal.SIGTERM, close_all)
    # 8< - - - ...

Edit: To further isolate the error and adapt more to production-like mode, one might rewrite the code like this (given that syslog is running on the machine, which it should be, but I never worked on devices of that kind):

    #! /usr/bin/env python
    import datetime as dt
    import time
    import signal
    import sys
    import syslog

    LOG_PATH = 'foobarbaz.log'  # '/mnt/usbdrive/output/TestSignal.txt'

    def close_all(signum, frame):
        """Log to system log. Do not spend too much time after receipt of TERM."""
        syslog.syslog(syslog.LOG_CRIT, 'Signal Number:%d {%s}' % (signum, frame))
        sys.exit(0)

    # register handler for SIGTERM(15) signal
    signal.signal(signal.SIGTERM, close_all)

    def get_sensor_readings_every(seconds):
        """Mock for sensor readings every seconds seconds."""
        time.sleep(seconds)
        return dt.datetime.now()

    def main():
        """Main loop - maybe check usage patterns for file resources."""
        syslog.syslog(syslog.LOG_USER, 'Program %s is running' % (__file__,))
        try:
            with open(LOG_PATH, 'a') as f:
                while True:
                    f.write('Hello at %s\r\n' % (
                        get_sensor_readings_every(15),))
        except KeyboardInterrupt:
            with open(LOG_PATH, 'a') as f:
                f.write('Done at %s\r\n' % (dt.datetime.now(),))

    if __name__ == '__main__':
        sys.exit(main())

Points to note:

- the log file for the actual measurements is separate from the logging channel for operational alerts
- the log file handle is safeguarded in context-managing blocks and in usual operation is just kept open
- the syslog channel is used for alerting
- as a sample of the message routing: the syslog.LOG_USER priority on my system (OS X) gives me a message in all terminals, whilst the priority used in the signal handler (syslog.LOG_CRIT in the code above) only targets the system log
- this should be more to the point during shutdown hassle (not opening a file, etc.)

The last point is important in case all processes receive a SIGTERM during shutdown, i.e. all want to do something (slowing things down), and maybe screen also does not accept any buffered input anymore (or does not flush); note stdout is block-buffered, not line-buffered. The decoupling of the output channels should also ease the eventual disappearance of the mount point of the measurement log file.
ReactorNotRestartable error
I have a tool where I am implementing upnp discovery of devices connected to the network. For that I have written a script and used the datagram class in it. Implementation: whenever the scan button is pressed on the tool, it will run that upnp script and will list the devices in the box created in the tool. This was working fine. But when I press the scan button again, it gives me the following error:

    Traceback (most recent call last):
      File "tool\ui\main.py", line 508, in updateDevices
        upnp_script.main("server", localHostAddress)
      File "tool\ui\upnp_script.py", line 90, in main
        reactor.run()
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1191, in run
        self.startRunning(installSignalHandlers=installSignalHandlers)
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1171, in startRunning
        ReactorBase.startRunning(self)
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 683, in startRunning
        raise error.ReactorNotRestartable()
    twisted.internet.error.ReactorNotRestartable

Main function of the upnp script:

    def main(mode, iface):
        klass = Server if mode == 'server' else Client
        obj = klass
        obj(iface)
        reactor.run()

There is a server class which is sending the M-SEARCH command (upnp) for discovering devices:

    MS = 'M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\nMX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT)

In the server class constructor, after sending the M-SEARCH, I am stopping the reactor:

    reactor.callLater(10, reactor.stop)

From google I found that we cannot restart a reactor because it is a limitation of Twisted: http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted

Please guide me how I can modify my code so that I am able to scan devices more than once and don't get this "reactor not restartable" error.
In response to "Please guide me how can i modify my code...": you haven't provided enough code that I would know how to specifically guide you; I would need to understand the (twisted part of the) logic around your scan/search.

If I were to offer a generic design/pattern/mental-model for the "twisted reactor" though, I would say: think of it as your program's main loop. (Thinking about the reactor that way is what makes the problem obvious to me, anyway...) I.e. most long running programs have a form something like:

    def main():
        while(True):
            check_and_update_some_stuff()
            sleep 10

That same code in twisted is more like:

    def main():
        # the LoopingCall adds the given function to the reactor loop
        l = task.LoopingCall(check_and_update_some_stuff)
        l.start(10.0)

        reactor.run()  # <--- this is the endless while loop

If you think of the reactor as "the endless loop that makes up the main() of my program", then you'll understand why no-one is bothering to add support for "restarting" the reactor. Why would you want to restart an endless loop? Instead of stopping the core of your program, you should only surgically stop the task inside that is complete, leaving the main loop untouched.

You seem to be implying that the current code will keep "sending m-search"s endlessly when the reactor is running. So change your sending code so it stops repeating the "send" (... I can't tell you how to do this because you didn't provide code, but for instance, a LoopingCall can be turned off by calling its .stop method).

Runnable example as follows:

    #!/usr/bin/python

    from twisted.internet import task
    from twisted.internet import reactor
    from twisted.internet.protocol import Protocol, ServerFactory

    class PollingIOThingy(object):
        def __init__(self):
            self.sendingcallback = None  # Note I'm pushing sendToAll into here in main()
            self.l = None                # Also being pushed in from main()
            self.iotries = 0

        def pollingtry(self):
            self.iotries += 1
            if self.iotries > 5:
                print "stoping this task"
                self.l.stop()
                return()
            print "Polling runs: " + str(self.iotries)
            if self.sendingcallback:
                self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

    class MyClientConnections(Protocol):
        def connectionMade(self):
            print "Got new client!"
            self.factory.clients.append(self)

        def connectionLost(self, reason):
            print "Lost a client!"
            self.factory.clients.remove(self)

    class MyServerFactory(ServerFactory):
        protocol = MyClientConnections

        def __init__(self):
            self.clients = []

        def sendToAll(self, message):
            for c in self.clients:
                c.transport.write(message)

    # Normally I would define a class of ServerFactory here but I'm going to
    # hack it into main() as they do in the twisted chat, to make things shorter

    def main():
        client_connection_factory = MyServerFactory()

        polling_stuff = PollingIOThingy()

        # the following line is what this example is all about:
        polling_stuff.sendingcallback = client_connection_factory.sendToAll
        # push the client connections send def into my polling class

        # if you want to run something every second (instead of 1 second after
        # the end of your last code run, which could vary) do:
        l = task.LoopingCall(polling_stuff.pollingtry)
        polling_stuff.l = l
        l.start(1.0)
        # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html

        reactor.listenTCP(5000, client_connection_factory)
        reactor.run()

    if __name__ == '__main__':
        main()

This script has extra cruft in it that you might not care about, so just focus on the self.l.stop() in PollingIOThingy's pollingtry method and the l related stuff in main() to illustrate the point.
(This code comes from SO: Persistent connection in twisted. Check that question if you want to know what the extra bits are about.)