I'm running the following piece of code:
p = subprocess.getoutput("python ./file.py")
How do I ensure the python version used is python3?
Thanks!
Assuming your controlling script is running under Python 3, and you simply want the subprocess to use the same interpreter, try:
import subprocess
import sys

p = subprocess.getoutput("'{}' ./file.py < input.txt".format(sys.executable))
The obvious way to ensure python 3.x is used is:
p = subprocess.getoutput("python3 ./file.py < input.txt")
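If you also want to avoid the shell and its quoting entirely, here is a minimal sketch using subprocess.run with the sys.executable approach from above (this assumes Python 3.5+ and the same ./file.py and input.txt as in the snippets):

import subprocess
import sys

# run file.py with the same interpreter as the current script,
# feed it input.txt on stdin, and capture its stdout as a string
with open("input.txt", "rb") as f:
    result = subprocess.run(
        [sys.executable, "./file.py"],
        stdin=f,
        stdout=subprocess.PIPE,
    )
output = result.stdout.decode()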
I have multiple python scripts, each with print statements and prompts for input. I run these scripts from a single python script as below.
os.system('python script1.py ' + sys.argv[1])
os.system('python script2.py ' + sys.argv[1]).....
The runs complete successfully; however, when I run all the scripts from a single file, I no longer see any print statements or prompts for input on the run console. I have researched and attempted many different ways to get this to work without success. Help would be much appreciated. Thanks.
If I understand correctly you want to run multiple python scripts synchronously, i.e. one after another.
You could use a bash script instead of python, but to answer your question of starting them from python...
Check out the subprocess module: https://docs.python.org/3.4/library/subprocess.html
In particular, look at subprocess.call: it accepts stdin and stdout arguments, to which you can pass sys.stdin and sys.stdout.
import sys
import subprocess
subprocess.call(['python', 'script1.py', sys.argv[1]], stdin=sys.stdin, stdout=sys.stdout)
subprocess.call(['python', 'script2.py', sys.argv[1]], stdin=sys.stdin, stdout=sys.stdout)
The above will work in Python 2.7 and 3. Another way of doing this is to import your file (module) and call the functions in it; the difference is that you're no longer running the code in a separate process.
subroutine.py
def run_subroutine():
    name = input('Enter a name: ')
    print(name)
master.py
import subroutine
subroutine.run_subroutine()
The following code (taken from https://github.com/dennybritz/tf-rnn/blob/master/bidirectional_rnn.ipynb)
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
# Create input data
X = np.random.randn(2, 10, 8)
# The second example is of length 6
X[1,6:] = 0
X_lengths = [10, 6]
cell = tf.contrib.rnn.LSTMCell(num_units=64, state_is_tuple=True)
outputs, states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell,
    cell_bw=cell,
    dtype=tf.float64,
    sequence_length=X_lengths,
    inputs=X)
output_fw, output_bw = outputs
states_fw, states_bw = states
is giving the following error with
tensorflow 1.1, on both Python 2.7 and 3.5:
ValueError: Attempt to reuse RNNCell <tensorflow.contrib.rnn.python.ops.core_rnn_cell_impl.LSTMCell object at 0x10ce0c2b0>
with a different variable scope than its first use. First use of cell was with scope
'bidirectional_rnn/fw/lstm_cell', this attempt is with scope 'bidirectional_rnn/bw/lstm_cell'.
Please create a new instance of the cell if you would like it to use a different set of weights.
If before you were using: MultiRNNCell([LSTMCell(...)] * num_layers), change to:
MultiRNNCell([LSTMCell(...) for _ in range(num_layers)]). If before you were using the same cell
instance as both the forward and reverse cell of a bidirectional RNN, simply create two instances
(one for forward, one for reverse). In May 2017, we will start transitioning this cell's behavior to use
existing stored weights, if any, when it is called with scope=None (which can lead to silent model degradation,
so this error will remain until then.)
But it works with
tensorflow 1.0.1 on Python 3.5 (I did not test Python 2.7).
I tried multiple code examples I found online, but
tf.nn.bidirectional_dynamic_rnn
gives the same error with tensorflow 1.1.
Is there a bug in tensorflow 1.1, or am I just missing something?
Sorry you ran into this. I can confirm that the error appears in 1.1 (docker run -it gcr.io/tensorflow/tensorflow:1.1.0 python) but not in 1.2 RC0 (docker run -it gcr.io/tensorflow/tensorflow:1.2.0-rc0 python).
So it looks like either 1.2-rc0 or 1.0.1 are your options for the moment.
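If you need to stay on 1.1 for now, the workaround the error message itself suggests is to create two separate cell instances instead of reusing one for both directions. A minimal sketch of that change to the code above (not tested here, but it follows the message's suggestion directly):

import tensorflow as tf
import numpy as np

tf.reset_default_graph()

# Create input data; the second example is of length 6
X = np.random.randn(2, 10, 8)
X[1, 6:] = 0
X_lengths = [10, 6]

# one cell per direction instead of a single shared instance
cell_fw = tf.contrib.rnn.LSTMCell(num_units=64, state_is_tuple=True)
cell_bw = tf.contrib.rnn.LSTMCell(num_units=64, state_is_tuple=True)

outputs, states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=cell_fw,
    cell_bw=cell_bw,
    dtype=tf.float64,
    sequence_length=X_lengths,
    inputs=X)

output_fw, output_bw = outputs
states_fw, states_bw = states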
I am a novice user of a cluster running RedHat Enterprise Linux. I run a Python script (Python 2.6.5) using the bsub command. Somehow this Python program just stops during the multiprocessing step. The program goes like this:
import os
from multiprocessing import Pool
import multiprocessing

def pop_genomics(chrom):
    os.system('run analysis on DNA')
    os.system('run analysis on DNA')
    os.system('run analysis on DNA')
    os.system('run analysis on DNA')
    print 'Finished!'
    return 'Done'
pool = multiprocessing.Pool(multiprocessing.cpu_count())
finalfiledirs=pool.map(pop_genomics, chroms)
pool.close()
pool.join()
I get the 'Finished!' message from all workers, but the program does not proceed beyond the 'finalfiledirs=pool.map(pop_genomics, chroms)' line. Can you suggest why this is happening?
You should be getting an error on that line, because in
pool.map(pop_genomics, chroms)
you never pass any parameters to pop_genomics, so you need to add some, making it:
pool.map(pop_genomics(parameters),chroms)
I have just started using IPython Notebook and have been fascinated by its power. I have been using a few examples available on the net to get started. I was following this tutorial: http://nbviewer.ipython.org/url/finiterank.com/cuadernos/suavesylocas.ipynb but the maths output is not getting rendered as expected. Below is my code and the output:
In [30]:
%load_ext sympyprinting
%pylab inline
from __future__ import division
import sympy as sym
from sympy import *
init_printing()
x,y,z=symbols("x y z")
k,m,n=symbols("k m n", integer=True)
The sympyprinting extension is already loaded. To reload it, use:
%reload_ext sympyprinting
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.kernel.zmq.pylab.backend_inline].
For more information, type 'help(pylab)'.
In [31]:
t = sin(2*pi*x*(k**2))/ (4*(pi**2)*(k**5)) + (x**2) / (2*k)
t
Out[31]:
x**2/(2*k) + sin(2*pi*k**2*x)/(4*pi**2*k**5)
(the expression comes out as plain-text pretty printing instead of rendered maths)
I have tried other examples also, and they are also not getting rendered properly. Where am I going wrong?
I had the same problem. Try
from sympy.interactive import printing
printing.init_printing(use_latex=True)
instead of
%load_ext sympyprinting
I am using sympy 0.7.2
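For reference, here is a minimal notebook cell built on the expression from the question, which should render once printing is initialized this way (a sketch, assuming sympy 0.7.x as above):

from sympy.interactive import printing
printing.init_printing(use_latex=True)

from sympy import symbols, sin, pi

x, k = symbols("x k")
t = sin(2*pi*x*k**2) / (4*pi**2*k**5) + x**2 / (2*k)
t  # as the last expression in a cell, this should now render as LaTeX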
I recently had the same problem, and I'm using CrunchBang Linux, which is a Debian derivative. Originally I installed sympy using
pip install sympy
However, this led to the problem described above. So I went to the sympy webpage and cloned the git repository into a folder. It can then be installed (from inside that folder) with
python setup.py install
After that everything worked fine, so I think it had something to do with the version used. For the record, the commands I used to initialize the printing in python were
import sympy
sympy.init_printing()
Import:
from sympy import *
init_printing()
Example:
x = symbols('x')
a = Integral(cos(x)*exp(x), x)
Eq(a, a.doit())
Output (rendered as LaTeX): ∫eˣ⋅cos(x) dx = eˣ⋅sin(x)/2 + eˣ⋅cos(x)/2
I would like to pass values from a Python program to a C++ program for encryption, and then return the result from the C++ program back to Python. How do I do that?
If you want to use some existing Unix-style command line utility that reads from stdin and writes to stdout, you can use subprocess.Popen by using Popen.communicate():
import subprocess

p = subprocess.Popen(["/your/app"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# communicate() writes the given input to the process's stdin,
# waits for it to finish, and returns a (stdout, stderr) tuple
output = p.communicate(input)[0]
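A slightly fuller sketch of the round trip, assuming a hypothetical compiled C++ binary ./encrypt that reads a value from stdin and writes the encrypted result to stdout:

import subprocess

plaintext = "secret value"

# ./encrypt is a placeholder name for your own C++ program
p = subprocess.Popen(
    ["./encrypt"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
stdout, _ = p.communicate(plaintext.encode())
ciphertext = stdout.decode()
print(ciphertext)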
As msw said in the other post, the proper solution is to use PyObject.
If you want two-way communication between C++ and Python, Boost.Python would be interesting for you. Take a look at the Boost.Python website.
This post would also be interesting:
How to expose a C++ class to Python without building a module