python: reading executable's stdout, broken stream - c++

I am trying to read the output of an executable (A), written in C++, from my Python script. I am working on Linux. The only way I know of so far is through the subprocess module.
First I tried:
from subprocess import Popen, PIPE, STDOUT
import sys

p = Popen(['executable', '-arg_flag1', arg1 ...], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
print "reach here"
stdout_output = p.communicate()[0]
print stdout_output
sys.stdin.read(1)
which turned out to hang both my executable (at 99% CPU usage) and my script. Moreover, "reach here" is printed.
After that I tried:
import subprocess

f = open("out.txt", 'r+')
command = 'executable -arg_flag1 arg1 ... '
subprocess.call(command, shell=True, stdout=f)
f.seek(0)
content = f.read()
and this works, but I get output where some characters at the end are repeated, or even more values are produced than expected.
Anyway, could someone enlighten me on a more proper way to do this?
Thanks in advance

The first solution is best: using shell=True is slower and has security issues.
The problem is that Popen doesn't wait for the process to complete, so Python exits, leaving the process without stdout, stdin, and stderr, and causing it to go wild. Adding p.wait() should do the trick!
Also, using communicate is a waste of time. Just do stdout_output = p.stdout.read(). You'll have to check yourself whether stdout_output contains anything, but this is still nicer than using communicate()[0].
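Putting that together, a minimal sketch (the executable name and arguments are the placeholders from the question):
from subprocess import Popen, PIPE, STDOUT

p = Popen(['executable', '-arg_flag1', 'arg1'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
stdout_output = p.stdout.read()  # drain the pipe first, so a full pipe buffer cannot block the child
p.wait()                         # then reap the process
print stdout_output
Note that the pipe is read before wait() is called; waiting first can deadlock if the child produces enough output to fill the pipe buffer.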

Related

Redirecting Pipe Output in C++

I’m looking for a C/C++ way to execute a shell command and store the output of that command in a std::string or file, without the output being automatically printed to the console.
All approaches I’ve seen can do exactly that, but also print the execution result to the console.
For example, with a combination of
FILE* pipe = popen("ls", "r");
and fgets() I am able to do just that, but the output still gets printed to the console.
Is there perhaps a way to redirect the stream's buffer from std::cout to a std::stringstream, or does it have something to do with Ubuntu?
Any help is appreciated :)
The 2>&1 part from the comments did it: what was still reaching the console was stderr, which popen does not capture. Redirecting it into stdout, as in popen("ls 2>&1", "r"), sends everything through the pipe.

Trying to get some output from subprocess.Popen works for all commands but bzip2

I am trying to get the output from subprocess.Popen, assign it to a variable, and then work with it in the rest of my program; however, the command executes without the output ever being assigned to my variable.
My current line of code is:
result = subprocess.Popen('bzip2 --version', shell=True, stdout=subprocess.PIPE).communicate()[0]
Currently, to test it, I'm just printing the length and the result, both of which are empty.
It does execute the command, but the output shows up in the terminal before my prints.
I have tried the code above with other commands and it works just as I would expect.
Any suggestions on how I can go about doing this?
Seems bzip2 writes to stderr instead of to stdout.
result = subprocess.Popen('bzip2 --version', shell=True, stderr=subprocess.PIPE).communicate()[1]
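The same capture without shell=True, as a sketch (list-form arguments; prints whatever bzip2 put on stderr):
import subprocess

# bzip2 writes its version banner to stderr, so that is the stream to capture
p = subprocess.Popen(['bzip2', '--version'], stderr=subprocess.PIPE)
result = p.communicate()[1]
print len(result)
print result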

Python Popen doesn't capture stderr

I need to be able to read stdout and stderr, as they occur, from a process that I spawn in Python. I am currently using:
from subprocess import Popen, PIPE

task = Popen(['sh', 'job.sh'], stdout=PIPE, bufsize=1)
with task.stdout:
    for line in iter(task.stdout.readline, b''):
        stream.append(line)
        fileHandle.write(line)
This gets me stdout, but stderr is being sent to the console:
./tmp_2edd9d49-4108-43e8-a09f-30f34488c531: line 1: #echo: command not found
I tried adding stderr=PIPE, but that made the errors vanish. Is there a way of doing this so I can read both? (I would really like the errors to appear in the right place.)
You can't omit the stderr argument if you want to capture it!
import subprocess as shell

raw_cmd = 'sh job.sh'
cmd_list = raw_cmd.split()
task = shell.Popen(cmd_list, stdout=shell.PIPE, stderr=shell.PIPE)

with task.stderr as stderr:
    for line in stderr:
        print line

with task.stdout as stdout:
    for line in stdout:
        print line
Basically the external program writes into two files, stdout and stderr, and we plug these "out-files" into our program. The way we do it in this example only lets us read all of stderr, then all of stdout, so right now there is no correlation between the two streams.
To track both files simultaneously, you would have to fall back to select, poll, or epoll, depending on installed libraries and OS.
e.g. on Linux:
...
from select import select
...
while 1:
    # `select` blocks until at least one of the files is ready!
    reads, writes, errors = select([task.stdout, task.stderr], [], [])
    for stdfile in reads:
        if stdfile == task.stdout:
            print "stdout:", stdfile.readline(),
        if stdfile == task.stderr:
            print "stdERR:", stdfile.readline(),
...
Beware, the code above is untested, but it would allow a tighter out/err correlation. It is also not an optimal solution, just a pointer to possible avenues.
You let select block until any of the specified files/pipes is ready, then check which file is ready (e.g. if stdfile == task.stderr), print from it, and repeat the loop with select.
If you don't want this loop to block, you could move it into a separate thread, or make select non-blocking and poll repeatedly (see select). A sketch of the threaded variant follows.
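A minimal, untested sketch of that threaded variant (reusing job.sh from the question; one thread drains stderr while the main thread drains stdout):
import subprocess as shell
import threading

def drain(pipe, prefix):
    # runs in its own thread, so reading one stream never blocks the other
    for line in iter(pipe.readline, b''):
        print prefix, line,
    pipe.close()

task = shell.Popen(['sh', 'job.sh'], stdout=shell.PIPE, stderr=shell.PIPE)

err_thread = threading.Thread(target=drain, args=(task.stderr, 'stdERR:'))
err_thread.start()
drain(task.stdout, 'stdout:')  # the main thread handles stdout
err_thread.join()
task.wait()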

subprocess.popen stream handling

Is it possible to prevent subprocess.popen from showing prompts in the terminal?
Attempting to map a drive, but I would like to read the prompt for credentials in the script rather than have it displayed in the terminal, the idea being that I can carry out actions based on the response.
I am aware the use of shell is frowned upon when using string-based commands (for obvious reasons), however I'm controlling the input, so I am happy with the risk for testing purposes.
I was under the impression that all stdout (interaction) would be captured into the output_null variable. Instead I am still getting the prompt in the terminal (as illustrated below). Either I'm misunderstanding how the streams work or I'm missing something. Can anyone enlighten me, please?
command = "mount -t smbfs //{s}/SYSVOL {m}".format(s=server, m=temp_dir)
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output_null = p.communicate()[0]
if "Password for" in output_null:
print 'awdaa'
The terminal shows:
Password for 192.168.1.111:

Avoid deadlock using Popen without using sleep (Python 2.7)

I have a problem with a deadlock using this Python script, which parses the output produced by piping two programs together and stores the result in x.
import subprocess as sp
from time import sleep
p1 = sp.Popen(['executable_1'], stdout=sp.PIPE, stderr=sp.STDOUT)
p2 = sp.Popen(['executable_2'], stdin=p1.stdout, stdout=sp.PIPE)
x = my_parser(p2.stdout)
However, if I change the script to use p2 = sp.Popen(['executable_2'], stdin=p1.stdout, stdout=sp.PIPE, preexec_fn=time.sleep(0.1)), everything seems to work fine.
This workaround doesn't seem very clean to me, though. I understand that by waiting a bit I give p1 a chance to flush its output to stdout (although if I call p1.stdout.flush() manually, I sometimes get an IOError as well).
I can't use communicate(), because the output of p2 is quite large and I want to process the data while executable_2 is still running.
How can I prevent the deadlock in this case without using sleep()?
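For reference, the pipeline pattern from the subprocess documentation closes p1.stdout in the parent once p2 has been started, so that p1 can receive a SIGPIPE if p2 exits; whether it cures this particular deadlock depends on the executables, but it is the standard starting point. A sketch, reusing the names from the question:
import subprocess as sp

p1 = sp.Popen(['executable_1'], stdout=sp.PIPE, stderr=sp.STDOUT)
p2 = sp.Popen(['executable_2'], stdin=p1.stdout, stdout=sp.PIPE)
p1.stdout.close()  # the parent drops its handle, so p1 gets SIGPIPE if p2 exits early
x = my_parser(p2.stdout)  # process p2's output incrementally, as in the question
p2.wait()
p1.wait()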