subprocess AttributeError: 'module' object has no attribute 'check_ouput' aix - python-2.7

Not sure why I'm getting this error, as subprocess.check_output is in Python 2.7, which is what I'm running:
ctmtest1-tctmsv80 [10] python del_jobs_main.py
sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0)
Traceback (most recent call last):
File "del_jobs_main.py", line 51, in <module>
sys.exit(main())
File "del_jobs_main.py", line 42, in main
p_call = do_sh_shell_command('env | grep machine')
File "del_jobs_main.py", line 33, in do_sh_shell_command
p = subprocess.check_ouput(string_command, shell=True, stdin=subprocess.PIPE,
AttributeError: 'module' object has no attribute 'check_ouput'
Any help with this weird error would be greatly appreciated.
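Note that the traceback shows subprocess.check_ouput, which is missing a "t": the attribute that actually exists in Python 2.7 is subprocess.check_output. A minimal sketch of the corrected call, assuming the goal is simply to capture the command's output:
import subprocess

# check_output (note the spelling) is available from Python 2.7 onward and
# returns the command's stdout as a string; with shell=True the command string
# is run by the shell, so the pipe works as expected.
output = subprocess.check_output('env | grep machine', shell=True)
print(output)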

Related

AttributeError: 'module' object has no attribute 'DEVNULL'

from nltk.parse.corenlp import CoreNLPServer
server = CoreNLPServer()
server.start()
When I run the above code, I'm getting the following error.
Traceback (most recent call last):
File "server.py", line 30, in <module>
server.start()
File "/usr/local/lib/python2.7/dist-packages/nltk/parse/corenlp.py", line 130, in start
stderr=stderr,
File "/usr/local/lib/python2.7/dist-packages/nltk/internals.py", line 112, in java
subprocess_output_dict = {'pipe': subprocess.PIPE, 'stdout': subprocess.STDOUT, 'devnull': subprocess.DEVNULL}
AttributeError: 'module' object has no attribute 'DEVNULL'
subprocess.DEVNULL is new in Python 3.3.
Make sure you use a version of nltk that still supports Python 2.7. From their changelog:
Version 3.5 2019-10-16
* drop support for Python 2
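If staying on Python 2.7 is a requirement, two hedged options: pin nltk to a release that still supports it (for example pip install "nltk<3.5", per the changelog entry above), or, for subprocess calls you control yourself, use an os.devnull file handle as the usual Python 2 substitute for subprocess.DEVNULL. A minimal sketch of the latter:
import os
import subprocess

# Python 2.7 has no subprocess.DEVNULL; an opened os.devnull handle
# serves the same purpose for calls you make yourself.
with open(os.devnull, 'wb') as devnull:
    subprocess.check_call(['ls', '/tmp'], stdout=devnull, stderr=devnull)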

NoneType has no attribute 'select' KerasDML SystemML

I'm having an issue while running the example Keras2DML code on this page. While running the code, I got this error:
Traceback (most recent call last):
File "/home/fregy/kerasplayground/sysml/examplenn.py", line 12, in <module>
sysml_model = Keras2DML(spark, keras_model,input_shape=(3,224,224))
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/estimators.py", line 909, in __init__
convertKerasToCaffeNetwork(keras_model, self.name + ".proto")
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/keras2caffe.py", line 201, in convertKerasToCaffeNetwork
jsonLayers = list(chain.from_iterable(imap(lambda layer: _parseKerasLayer(layer), kerasModel.layers)))
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/keras2caffe.py", line 201, in <lambda>
jsonLayers = list(chain.from_iterable(imap(lambda layer: _parseKerasLayer(layer), kerasModel.layers)))
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/keras2caffe.py", line 137, in _parseKerasLayer
ret = { 'layer': { 'name':layer.name, 'type':supportedLayers[layerType], 'bottom':_getBottomLayers(layer), 'top':layer.name, paramName:param[paramName] } }
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/keras2caffe.py", line 112, in _getBottomLayers
return [ bottomLayer.name for bottomLayer in _getInboundLayers(layer) ]
File "/usr/local/lib/python2.7/dist-packages/systemml/mllearn/keras2caffe.py", line 70, in _getInboundLayers
for node in layer.inbound_nodes: # get inbound nodes to current layer
AttributeError: 'Conv2D' object has no attribute 'inbound_nodes'
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/SocketServer.py", line 230, in serve_forever
r, w, e = _eintr_retry(select.select, [self], [], [],
AttributeError: 'NoneType' object has no attribute 'select'
I'm using TensorFlow-GPU 1.5 and Keras 2.1.3.
Thanks for trying out Keras2DML. The issue arises because the newer Keras versions renamed the attribute inbound_nodes to _inbound_nodes. This issue was fixed in yesterday's commit: https://github.com/apache/systemml/commit/9c3057a34c84d5bf1c698ad0a5c3c34d90412dbb.
Since you are using TensorFlow-GPU, you may want to check (using nvidia-smi) whether TF grabs most of the GPU memory when the Keras model is compiled. If so, here are two easy workarounds:
a. Hide the GPUs from TF:
import os
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import tensorflow as tf
b. Or minimize the overhead due to TensorFlow:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
set_session(tf.Session(config=tf_config))

Python can't run ANY script

I have the 64-bit version of Python 2.7.14 installed and I can't run even the simplest of scripts. It works normally if I copy the code into the Shell and run it from there. This is the script I'm trying to run:
def fun(a):
    print a
    return
And here is the error message:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1541, in __call__
return self.func(*args)
File "C:\Python27\lib\idlelib\MultiCall.py", line 166, in handler
r = l[i](event)
File "C:\Python27\lib\idlelib\ScriptBinding.py", line 149, in run_module_event
if PyShell.use_subprocess:
AttributeError: 'module' object has no attribute 'use_subprocess'

Error when I try to run OpenRAM, AttributeError: 'NoneType' object has no attribute 'startswith'

I get the error below. How should I fix it?
ERROR: runTest (30_openram_test.openram_test)
Traceback (most recent call last):
File "/home/leepil/OpenRAM-master/compiler/tests/30_openram_test.py", line 23, in runTest
globals.init_openram("config_20_{0}".format(OPTS.tech_name))
File "/home/leepil/OpenRAM-master/compiler/tests/../globals.py", line 113, in init_openram
import_tech()
File "/home/leepil/OpenRAM-master/compiler/tests/../globals.py", line 314, in import_tech
__import__(filename)
File "/home/leepil/OpenRAM-master/technology/setup_scripts/setup_openram_freepdk45.py", line 16, in <module>
PDK_DIR=os.path.abspath(os.environ.get("FREEPDK45"))
File "/home/leepil/PycellStudio3/plat_linux_gcc44x_64/3rd/lib/python2.7/posixpath.py", line 367, in abspath
if not isabs(path):
File "/home/leepil/PycellStudio3/plat_linux_gcc44x_64/3rd/lib/python2.7/posixpath.py", line 61, in isabs
return s.startswith('/')
AttributeError: 'NoneType' object has no attribute 'startswith'
For info, I am on OpenRAM-master (tech is freepdk45) and Python version 2.7.9. See the detailed error message above. Thanks!
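For what it's worth, the last two frames show os.path.abspath being called with the result of os.environ.get("FREEPDK45"), which is None whenever that environment variable is unset, hence the 'NoneType' error. A minimal pre-flight check, as a sketch (the error message text is only illustrative):
import os

# os.environ.get("FREEPDK45") returns None when the variable is not set,
# and os.path.abspath(None) then fails exactly as in the traceback above.
pdk_dir = os.environ.get("FREEPDK45")
if pdk_dir is None:
    raise RuntimeError("FREEPDK45 is not set; point it at your FreePDK45 installation before running OpenRAM")
print(os.path.abspath(pdk_dir))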

Google Vision Python 2.7 TypeError: construct_settings() got an unexpected keyword argument 'metrics_headers'

After installing the required packages using pip, downloading a JSON key, setting the environment variable in the cmd window with set GOOGLE_APPLICATION_CREDENTIALS = 'C:\Users\ xxx .json', and following the instructions for using the Google Vision API at https://googlecloudplatform.github.io/google-cloud-python/stable/vision-usage.html#authentication-and-configuration,
I tried the following and got the error below. I have no idea how to solve it, so any suggestions are much appreciated:
>>> from google.cloud import vision
>>> client =vision.Client()
>>> print client
<google.cloud.vision.client.Client object at 0x08D414F0>
>>> image = client.image(filename='test2.jpg')
>>> print image
<google.cloud.vision.image.Image object at 0x0CBF68F0>
>>> text = image.detect_text()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 289, in detect_text
annotations = self.detect(features)
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 143, in detect
return self._detect_annotation(images)
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 117, in _detect_annotation
return self.client._vision_api.annotate(images)
File "C:\Python27\lib\site-packages\google\cloud\vision\client.py", line 114, in _vision_api
self._vision_api_internal = _GAPICVisionAPI(self)
File "C:\Python27\lib\site-packages\google\cloud\vision\_gax.py", line 34, in __init__
lib_version=__version__)
File "C:\Python27\lib\site-packages\google\cloud\gapic\vision\v1\image_annotator_client.py", line 140, in __init__
metrics_headers=metrics_headers, )
TypeError: construct_settings() got an unexpected keyword argument 'metrics_headers'
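Two hedged observations. First, this kind of TypeError usually points to mismatched versions of the installed google-cloud-vision and google-gax packages, so upgrading them together with pip is a reasonable first step. Second, cmd's set keeps the spaces around = and the quotes literally, so GOOGLE_APPLICATION_CREDENTIALS is probably not set to the intended path; a sketch of setting it from Python instead (the path is a placeholder), reusing the client calls shown above:
import os

# Placeholder path: point this at the downloaded JSON key before constructing the client.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = r"C:\Users\you\key.json"

from google.cloud import vision

client = vision.Client()
image = client.image(filename='test2.jpg')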