error at interactive python console - python-2.7

I have a simple class created with the PyCharm editor. I want to instantiate several objects with different inputs in the IPython console and check how different objects of the class behave. Is it even possible to do this in the console? I end up with the following error:
rect = Rectangle(5,4)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-
packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-14-48f6e104f766>", line 1, in <module>
rect = Rectangle(5,4)
NameError: name 'Rectangle' is not defined
I must have missed something in the process.
--------EDIT------------------
I tried the very same thing in the Spyder editor without importing anything there, and it works fine.
Did you mean -
import Rectangle
I tried it as well and then I got the following error:
import Rectangle
Traceback (most recent call last):
File "/usr/lib/python2.7/dist -
packages/IPython/core/interactiveshell.py", line 2883, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-15-6878d2e924f4>", line 1, in <module>
import Rectangle
File "/home/sajjad/Hämtningar/programvara/PyCharm/pycharm-community-2017.1/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
ImportError: No module named Rectangle
---------EDIT1------------------
import os
print os.getcwd()
/home/sajjad/PycharmProjects/PyQt/book/chap03
And Rectangle.py is in the directory above.
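If the class lives in Rectangle.py rather than being typed directly into the console, the name has to be imported from that file before it can be used. Here is a minimal sketch, assuming Rectangle.py in the directory printed above defines a class named Rectangle (adjust the names if yours differ):
import os
import sys

# Make sure the directory containing Rectangle.py is on the module search path,
# in case the console's search path does not already include it.
sys.path.insert(0, os.getcwd())

# Import the class from the module (not just the module), then instantiate it.
from Rectangle import Rectangle
rect = Rectangle(5, 4)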

Related

Google Vision Python 2.7 TypeError: construct_settings() got an unexpected keyword argument 'metrics_headers'

After installing the required packages with pip, downloading a JSON key, setting the environment variable in the cmd window with set GOOGLE_APPLICATION_CREDENTIALS = 'C:\Users\ xxx .json', and following the instructions for the Google Vision API at https://googlecloudplatform.github.io/google-cloud-python/stable/vision-usage.html#authentication-and-configuration, I tried the following and got the error below. I have no idea how to solve it, so all suggestions are much appreciated:
>>> from google.cloud import vision
>>> client =vision.Client()
>>> print client
<google.cloud.vision.client.Client object at 0x08D414F0>
>>> image = client.image(filename='test2.jpg')
>>> print image
<google.cloud.vision.image.Image object at 0x0CBF68F0>
>>> text = image.detect_text()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 289, in detect_text
annotations = self.detect(features)
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 143, in detect
return self._detect_annotation(images)
File "C:\Python27\lib\site-packages\google\cloud\vision\image.py", line 117, in _detect_annotation
return self.client._vision_api.annotate(images)
File "C:\Python27\lib\site-packages\google\cloud\vision\client.py", line 114, in _vision_api
self._vision_api_internal = _GAPICVisionAPI(self)
File "C:\Python27\lib\site-packages\google\cloud\vision\_gax.py", line 34, in __init__
lib_version=__version__)
File "C:\Python27\lib\site-packages\google\cloud\gapic\vision\v1\image_annotator_client.py", line 140, in __init__
metrics_headers=metrics_headers, )
TypeError: construct_settings() got an unexpected keyword argument 'metrics_headers'
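No answer is posted here, but a TypeError about an unexpected keyword argument inside the generated client code usually points to mismatched versions of the google-cloud-vision and google-gax packages. A minimal diagnostic sketch, assuming the packages were installed under these distribution names (adjust to whatever pip list reports), to see which versions are present before upgrading them together:
import pkg_resources

# Report the installed version of each relevant distribution, if present.
for dist_name in ('google-cloud-vision', 'google-gax'):
    try:
        print dist_name, pkg_resources.get_distribution(dist_name).version
    except pkg_resources.DistributionNotFound:
        print dist_name, 'not installed'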

nltk lookup error in Stanford Neural Dependency Parser

I am trying to use the Stanford Neural Dependency Parser provided by nltk. The problem I'm having is that when I call st = nltk.parse.stanford.StanfordNeuralDependencyParser(), I get the following error:
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser()
Traceback (most recent call last):
File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-ca2dec4f3c1f>", line 1, in <module>
st = nltk.parse.stanford.StanfordNeuralDependencyParser()
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
key=lambda model_name: re.match(self._JAR, model_name)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 714, in find_jar_iter
raise LookupError('\n\n%s\n%s\n%s' % (div, msg, div))
LookupError:
===========================================================================
NLTK was unable to find stanford-corenlp-(\d+)(\.(\d+))+\.jar! Set
the CLASSPATH environment variable.
For more information, on stanford-corenlp-(\d+)(\.(\d+))+\.jar, see:
<http://nlp.stanford.edu/software/lex-parser.shtml>
===========================================================================
But when I run os.environ.get('CLASSPATH'), I get the result
`C:\nltk_data\;C:\nltk_data\stanford\;C:\nltk_data\stanford\stanford-ner\`
I know that I have the corenlp jar file in C:\nltk_data\stanford\ so I run the following and end up with a slightly different error.
>>> st = nltk.parse.stanford.StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
Traceback (most recent call last):
File "C:\Users\<user>\Anaconda2\lib\site-packages\IPython\core\interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-22-28d797d702d9>", line 1, in <module>
st = StanfordNeuralDependencyParser('C:\\nltk_data\\stanford\\')
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 378, in __init__
super(StanfordNeuralDependencyParser, self).__init__(*args, **kwargs)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\parse\stanford.py", line 51, in __init__
key=lambda model_name: re.match(self._JAR, model_name)
File "C:\Users\<user>\Anaconda2\lib\site-packages\nltk\internals.py", line 635, in find_jar_iter
(name_pattern, path_to_jar))
LookupError: Could not find stanford-corenlp-(\d+)(\.(\d+))+\.jar jar file at C:\nltk_data\stanford\
I have downloaded the jar stanford-english-corenlp-2016-01-10-models.jar from the Stanford NLP website and also renamed it to stanford-corenlp-2016-01-10.jar to try and match the pattern, but I still end up with the same errors. I have also downloaded the Stanford Parser version 3.6.0, but it doesn't contain any corenlp files.
Is there any way to get this to work, or am I misunderstanding something?
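One detail worth noting: the pattern stanford-corenlp-(\d+)(\.(\d+))+\.jar only matches a dot-separated version number, so a renamed file like stanford-corenlp-2016-01-10.jar (with dashes) still fails the match. A minimal sketch of one setup that would satisfy the filename lookup, assuming the actual CoreNLP jar has been given a dot-separated name such as stanford-corenlp-3.6.0.jar (name assumed for illustration) and placed in C:\nltk_data\stanford\:
import os
from nltk.parse.stanford import StanfordNeuralDependencyParser

# CLASSPATH must contain the directory that holds a jar whose filename
# matches stanford-corenlp-<dot.separated.version>.jar.
os.environ['CLASSPATH'] = 'C:\\nltk_data\\stanford\\'

st = StanfordNeuralDependencyParser()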

How to resolve "_tkinter.TclError: unknown option"?

I am learning Python Tkinter, but I get an error whenever I try to run my code:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/spyderlib/
widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/home/jason/.spyder2/.temp.py", line 14, in <module>
menu.config(menu=menu)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1274,
inconfigure
return self._configure('configure', cnf, kw)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1265,
in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: unknown option "-menu").
My code is:
from tkinter import *

def hello():
    print "hello"

root = Tk()
menu = Menu(root)
menu.config(menu=menu)
menu.add_command(label="new", command=hello)
root.mainloop()
There is a small issue in your code: use root.config instead of menu.config and you will not get this error.
For more information and a detailed tutorial, kindly visit Tkinter Menu Widget.
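For reference, a minimal corrected sketch of the snippet above with that single change applied (written against the Python 2 module name Tkinter, since the traceback comes from lib-tk/Tkinter.py):
from Tkinter import *

def hello():
    print "hello"

root = Tk()
menu = Menu(root)
# Attach the menu bar to the window itself, not to the menu.
root.config(menu=menu)
menu.add_command(label="new", command=hello)
root.mainloop()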

How can I resolve the "import multiarray" error in parallel python?

I am trying to run a Python 2.7 script which uses Parallel Python (version 1.6.1) to execute a function that uses numpy arrays (numpy version 1.6.1) on an Ubuntu Voyager (Ubuntu 12.04 derivative) system. It gives me the following error message (actually it is longer; it is repeated 12 times, I guess):
Starting pp with 12 workers
* An error has occured during the module import
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/ppworker.py", line 49, in preprocess
exec module
File "<string>", line 1, in <module>
File "/usr/share/pyshared/numpy/__init__.py", line 137, in <module>
import add_newdocs
File "/usr/share/pyshared/numpy/add_newdocs.py", line 9, in <module>
from numpy.lib import add_newdoc
File "/usr/share/pyshared/numpy/lib/__init__.py", line 4, in <module>
from type_check import *
File "/usr/share/pyshared/numpy/lib/type_check.py", line 8, in <module>
import numpy.core.numeric as _nx
File "/usr/share/pyshared/numpy/core/__init__.py", line 5, in <module>
import multiarray
ImportError: No module named multiarray
A fatal error has occured during the function execution
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/ppworker.py", line 86, in run
__args = pickle.loads(__sargs)
File "/usr/share/pyshared/numpy/__init__.py", line 137, in <module>
import add_newdocs
File "/usr/share/pyshared/numpy/add_newdocs.py", line 9, in <module>
from numpy.lib import add_newdoc
File "/usr/share/pyshared/numpy/lib/__init__.py", line 4, in <module>
from type_check import *
File "/usr/share/pyshared/numpy/lib/type_check.py", line 8, in <module>
import numpy.core.numeric as _nx
File "/usr/share/pyshared/numpy/core/__init__.py", line 5, in <module>
import multiarray
ImportError: No module named multiarray
Exception in thread run_local:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/pymodules/python2.7/pp.py", line 719, in _run_local
job.finalize(sresult)
UnboundLocalError: local variable 'sresult' referenced before assignment
Exception in thread run_local:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 504, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/pymodules/python2.7/pp.py", line 719, in _run_local
job.finalize(sresult)
UnboundLocalError: local variable 'sresult' referenced before assignment
From other threads (e.g. importing NumPy in Parallel Python) I have seen that it's probably an issue with how pp and numpy interact, so I have tried the following:
updating pp and numpy
reinstalling numpy (I'm using the anaconda package)
running the code directly from the shell (not from spyder)
submitting multiarray (which otherwise imports just fine) directly to the workers with the job (and I'm sure I have the correct syntax ...)
but to no avail. Do you have any other suggestions? Is there a simple workaround other than, erm, "de-numpying" my function (which would be sad, since it's a matrix multiplication)? My colleague, who uses a more recent version of the same operating system but otherwise the same Python setup, does not seem to have this problem; but since time is an issue, updating my system or hijacking his slower computer are both not the best options.
If necessary, feel free to ask for more details & thank you in advance.
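For reference, the usual way to hand numpy to the pp workers is to list the whole package in the modules argument of submit, rather than submitting the bare multiarray submodule. A minimal sketch under that assumption, with a made-up function matmul standing in for the real matrix multiplication:
import numpy
import pp

def matmul(a, b):
    # numpy is available here because it is listed in the modules tuple below,
    # so each worker runs "import numpy" before executing the job.
    return numpy.dot(a, b)

job_server = pp.Server()
a = numpy.ones((3, 3))
b = numpy.eye(3)

# Ask pp to import the full numpy package on every worker.
job = job_server.submit(matmul, (a, b), modules=("numpy",))
print job()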

Python ImportError: No module named ext

When I run my code I get the following error: ImportError: No module named ext
Code sample causing the error:
import module.model
module.model.dropdb(input)
module.model.createdb(input)
The traceback is as follows:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "module/models/__init__.py", line 54, in drop_db
drop_db_with_migrations(quiet)
File "module/models/__init__.py", line 31, in drop_db_with_migrations
from module.app import db
File "module/app.py", line 42, in <module>
app.jinja_env.add_extension('hamlpy.ext.HamlPyExtension')
File "/vagrant-dev/opt/dev_virtualenv/local/lib/python2.7/site-packages/Jinja2
-2.6-py2.7.egg/jinja2/environment.py", line 288, in add_extension
self.extensions.update(load_extensions(self, [extension]))
File "/vagrant-dev/opt/dev_virtualenv/local/lib/python2.7/site-packages/Jinja2
-2.6-py2.7.egg/jinja2/environment.py", line 75, in load_extensions
extension = import_string(extension)
File "/vagrant-dev/opt/dev_virtualenv/local/lib/python2.7/site-packages/Jinja2
-2.6-py2.7.egg/jinja2/utils.py", line 213, in import_string
return getattr(__import__(module, None, None, [obj]), obj)
ImportError: No module named ext
Your problem is visible in your traceback:
Traceback (most recent call last):
-- SNIP --
File "module/app.py", line 42, in <module>
app.jinja_env.add_extension('hamlpy.ext.HamlPyExtension')
-- SNIP --
ImportError: No module named ext
Jinja2 uses the dunder import mechanism __import__(some_package_name_string). It's unable to find a subpackage ext in your hamlpy package.
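A quick way to confirm this from a shell is to reproduce by hand what jinja2.utils.import_string does with that extension string (the string comes straight from the code above; nothing else is assumed):
# Split 'hamlpy.ext.HamlPyExtension' the way import_string does.
module, obj = 'hamlpy.ext', 'HamlPyExtension'

# If hamlpy has no ext submodule, this raises the same
# "ImportError: No module named ext" seen in the traceback.
extension = getattr(__import__(module, None, None, [obj]), obj)
print extension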