name from Orange is not defined - python-2.7

I've set up Orange and tried to execute this code in PythonWin,
and got an error on the second line.
Was my setup of Orange incomplete, or is it something else?
>>> from Orange.data import *
>>> color = DiscreteVariable("color", values=["orange", "green", "yellow"])
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
NameError: name 'DiscreteVariable' is not defined

I'm not sure what the author of the blog post is doing, or maybe there are other steps he explained in previous posts, but this code as-is is not going to work.
I searched the Orange source code, and DiscreteVariable isn't mentioned anywhere: not as a class, not even as a plain word.
What I did find however is
Discrete = core.EnumVariable
in Orange/feature/__init__.py. As you can see, this points to core.EnumVariable, which appears, looking at its usage:
orange.EnumVariable('color', values=["green", "red"])
to be the same as DiscreteVariable in your link.
So I suggest you import Discrete from Orange.feature instead and use that.
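A minimal sketch of that substitution, assuming Discrete accepts the same arguments as core.EnumVariable shown above:
from Orange.feature import Discrete

# Discrete is an alias for core.EnumVariable, so it takes a name and a list of values
color = Discrete("color", values=["orange", "green", "yellow"])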

Related

python sqlite3 .executemany() with named placeholders?

This works:
ss = 'insert into images (file_path) values(?);'
dddd = (('dd1',), ('dd2',))
conn.executemany(ss, dddd)
However this does not:
s = 'insert into images (file_path) values (:v)'
ddddd = ({':v': 'dd11'}, {':v': 'dd22'})
conn.executemany(s, ddddd)
Traceback (most recent call last):
File "/Users/Wes/.virtualenvs/ppyy/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3035, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-31-a999de59f73b>", line 1, in <module>
conn.executemany(s, ddddd)
ProgrammingError: You did not supply a value for binding 1.
I am wondering if it is possible to use named parameters with executemany and, if so, how.
The documentation at section 11.13.3 talks generally about parameters but doesn't discuss the two styles of parameters that are described for other flavors of .executexxx().
I have checked out Python sqlite3 execute with both named and qmark parameters, which does not pertain to executemany.
The source shows that execute() simply constructs a one-element list and calls executemany(), so the problem is not with executemany() itself; the same call fails with execute():
>>> conn.execute('SELECT :v', {':v': 42})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlite3.ProgrammingError: You did not supply a value for binding 1.
As shown in the Python documentation, named parameters do not include the colon:
# And this is the named style:
cur.execute("select * from people where name_last=:who and age=:age", {"who": who, "age": age})
So you have to use ddddd = ({'v': 'dd11'}, {'v': 'dd22'}).
The : isn't part of the parameter name.
>>> s = 'insert into images (file_path) values (:v)'
>>> ddddd = ({'v': 'dd11'}, {'v': 'dd22'})
>>> conn.executemany(s, ddddd)
<sqlite3.Cursor object at 0x0000000002C0E500>
>>> conn.execute('select * from images').fetchall()
[(u'dd11',), (u'dd22',)]
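For reference, a self-contained sketch of the working version; the in-memory database and the images table definition are assumptions for illustration:
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table images (file_path text)')

# named placeholders: the dict keys omit the leading colon
rows = ({'v': 'dd11'}, {'v': 'dd22'})
conn.executemany('insert into images (file_path) values (:v)', rows)

print(conn.execute('select * from images').fetchall())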

retrieving data saved under an HDF5 group as CArray

I am new to the HDF5 file format, and I have data (images) saved in HDF5 format. The images are saved as CArrays under a group called 'data', which is under the root group. What I want to do is retrieve a slice of the saved images, for example the first 400 or so. The following is what I did.
h5f = h5py.File('images.h5f', 'r')
image_grp= h5f['/data/'] #the image group (data) is opened
print(image_grp[0:400])
but I am getting the following error
Traceback (most recent call last):
File "fgf.py", line 32, in <module>
print(image_grp[0:40])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/feedstock_root/build_artefacts/h5py_1496410723014/work/h5py-2.7.0/h5py/_objects.c:2846)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/feedstock_root/build_artefacts/h5py_1496410723014/work/h5py-2.7.0/h5py/_objects.c:2804)
File "/..../python2.7/site-packages/h5py/_hl/group.py", line 169, in __getitem__
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "/..../python2.7/site-packages/h5py/_hl/base.py", line 133, in _e
name = name.encode('ascii')
AttributeError: 'slice' object has no attribute 'encode'
I am not sure why I am getting this error but I am not even sure if I can slice the images which are saved as individual datasets.
I know this is an old question, but it is the first hit when searching for 'slice' object has no attribute 'encode' and it has no solution.
The error happens because image_grp is a group, not a dataset: when you index a group with a slice, h5py treats the slice as a key name and tries to encode it. You are looking for a dataset element inside the group.
You need to find/know the key for the item that contains the dataset.
One suggestion is to list all keys in the group, and then guess which one it is:
print(list(image_grp.keys()))
This will give you the keys in the group.
A common case is that the first element is the image, so you can do this:
image_grp = h5f['/data/']
image = image_grp[list(image_grp.keys())[0]]
print(image[0:400])
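If the images were instead stored together in one large dataset under the group (an assumption here; the dataset path below is hypothetical), the slice would work directly on that dataset:
h5f = h5py.File('images.h5f', 'r')
dset = h5f['/data/images']  # hypothetical dataset path
print(dset[0:400])  # slicing works on datasets, not on groups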
Yesterday I had a similar error and wrote this little piece of code to take the desired slice of an h5py file.
import h5py

def h5py_slice(h5py_file, begin_index, end_index):
    # assumes each image is stored as its own dataset, keyed by its index as a string
    slice_list = []
    with h5py.File(h5py_file, 'r') as f:
        for i in range(begin_index, end_index):
            slice_list.append(f[str(i)][...])
    return slice_list
and it can be used like
the_desired_slice_list = h5py_slice('images.h5f', 0, 400)

praw.errors.Forbidden: HTTP error when using Reddit get_flair_list

I am trying to get the comments for each Reddit post.
This is the way I am getting the flair list:
import praw
import webbrowser
r = praw.Reddit('OAuth testing example by u/_Daimon_ ver 0.1 see '
'https://praw.readthedocs.org/en/latest/'
'pages/oauth.html for source')
r.set_oauth_app_info(client_id='[client id]',
client_secret='[client secret]',
redirect_uri='http://localhost/authorize_callback')
url = r.get_authorize_url('uniqueKey', 'modflair', True)
webbrowser.open(url)
Then I took the code from the returned URL and used it to get the access information, like this:
access_information = r.get_access_information('[returned code]')
Then, when I try to call get_flair_list() just as in the PRAW tutorial, like this:
item = next(r.get_subreddit('travel').get_flair_list())
It gives me an error, showing:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/praw-3.4.0-py2.7.egg/praw/__init__.py", line 565, in get_content
page_data = self.request_json(url, params=params)
File "<string>", line 2, in request_json
File "/Library/Python/2.7/site-packages/praw-3.4.0-py2.7.egg/praw/decorators.py", line 116, in raise_api_exceptions
return_value = function(*args, **kwargs)
File "/Library/Python/2.7/site-packages/praw-3.4.0-py2.7.egg/praw/__init__.py", line 620, in request_json
retry_on_error=retry_on_error)
File "/Library/Python/2.7/site-packages/praw-3.4.0-py2.7.egg/praw/__init__.py", line 452, in _request
_raise_response_exceptions(response)
File "/Library/Python/2.7/site-packages/praw-3.4.0-py2.7.egg/praw/internal.py", line 208, in _raise_response_exceptions
raise Forbidden(_raw=response)
praw.errors.Forbidden: HTTP error
Here's the link of that PRAW tutorial: PRAW tutorial
Do you know how to solve this problem? How can I call get_flair_list() to get all the comments of a Reddit post?
There are a few things potentially going on here.
The first issue (and the most likely) is that you are logging in wrong.
r = praw.Reddit('OAuth testing example by u/_Daimon_ ver 0.1 see '
'https://praw.readthedocs.org/en/latest/'
'pages/oauth.html for source')
DON'T DO THIS, EVER.
Even if the syntax in this command were correct (you don't have the commas), it makes your code INCREDIBLY hard to read. The most readable way is to have r = praw.Reddit('OAuth-testing') (the OAuth-testing bit can be whatever you want, as long as it matches the section name in your praw.ini file), and then set up your praw.ini file as follows (a short usage sketch comes after the file):
[DEFAULT]
# A boolean to indicate whether or not to check for package updates.
check_for_updates=True
# Object to kind mappings
comment_kind=t1
message_kind=t4
redditor_kind=t2
submission_kind=t3
subreddit_kind=t5
# The URL prefix for OAuth-related requests.
oauth_url=https://oauth.reddit.com
# The URL prefix for regular requests.
reddit_url=https://www.reddit.com
# The URL prefix for short URLs.
short_url=https://redd.it
[OAuth-testing]
user_agent=USER-AGENT-HERE
username=REDDIT-ACCOUNT-USERNAME
password=REDDIT-ACCOUNT-PASSWORD
client_id=REDDIT-APP-CLIENT-ID
client_secret=REDDIT-APP-CLIENT-SECRET
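With a praw.ini like that in place, the login line reduces to something like the following (a minimal sketch; the section name just has to match the one in the file):
import praw

# picks up user_agent and credentials from the [OAuth-testing] section of praw.ini
r = praw.Reddit('OAuth-testing')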
Just as an additional note, get_flair_list() also requires moderator access, as documented here.
Also, you ask at the bottom:
How can I call get_flair_list() to get all the comments of a Reddit post?
This would not be how you get all the comments of a post; if that is what you want to do, you can read this tutorial in the PRAW docs.
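For illustration only, here is a rough sketch of pulling a post's comments with PRAW 3.x; the submission id is a placeholder:
import praw

r = praw.Reddit('OAuth-testing')
submission = r.get_submission(submission_id='92dd8')  # placeholder id
submission.replace_more_comments(limit=None, threshold=0)  # expand "load more comments" stubs
for comment in praw.helpers.flatten_tree(submission.comments):
    print(comment.body)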
If you have any further questions don't hesitate to comment on this answer and I or somebody else can answer it!

Mailgun Talon: Signature extraction example throwing error

I installed mailgun/talon on GCE and was trying out the example in the README section, but it threw the following error at me:
>>> from talon import signature
>>> message = """Thanks Sasha, I can't go any higher and is why I limited it to the
... homepage.
...
... John Doe
... via mobile"""
>>> message
"Thanks Sasha, I can't go any higher and is why I limited it to the\nhomepage.\n\nJohn Doe\nvia mobile"
>>> text, signtr = signature.extract(message, sender='john.doe@example.com')
ERROR:talon.signature.extraction:ERROR when extracting signature with classifiers
Traceback (most recent call last):
File "talon/signature/extraction.py", line 57, in extract
markers = _mark_lines(lines, sender)
File "talon/signature/extraction.py", line 99, in _mark_lines
elif is_signature_line(line, sender, EXTRACTOR):
File "talon/signature/extraction.py", line 40, in is_signature_line
return classifier.decisionFunc(data, 0) > 0
AttributeError: 'NoneType' object has no attribute 'decisionFunc'
Do I need to train the model somehow (this signature seems to be the ML example)? I installed it using pip.
If you want to use signature parsing with classifiers you just need to call talon.init() before using the lib - it loads trained classifiers. Other methods like talon.signature.bruteforce.extract_signature() or talon.quotations.extract_from() don't require classifiers. Here's a full code sample:
import talon
# don't forget to init the library first
# it loads machine learning classifiers
talon.init()
from talon import signature
message = """Thanks Sasha, I can't go any higher and is why I limited it to the
homepage.
John Doe
via mobile"""
text, signature = signature.extract(message, sender='john.doe@example.com')
# text == "Thanks Sasha, I can't go any higher and is why I limited it to the\nhomepage."
# signature == "John Doe\nvia mobile"
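Alternatively, the brute-force extractor mentioned above does not need talon.init(), since it does not rely on the trained classifiers; a minimal sketch:
from talon.signature.bruteforce import extract_signature

# returns the message text with the signature stripped, plus the signature (or None)
text, sig = extract_signature(message)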

ValueError: need more than 1 value to unpack - python graph core

I was planning to use this code to do a critical path analysis.
When running it I got the following error, but I have no idea what it means (because I don't know how the code works).
Traceback (most recent call last):
File "/Users/PeterVanvoorden/Desktop/test.py", line 22, in <module>
G.add_edge('A','B',1)
File "/Library/Python/2.7/site-packages/python_graph_core-1.8.2-py2.7.egg/pygraph/classes/digraph.py", line 161, in add_edge
u, v = edge
ValueError: need more than 1 value to unpack
# Copyright (c) 2007-2008 Pedro Matiello <pmatiello@gmail.com>
# License: MIT (see COPYING file)
import sys
sys.path.append('..')
import pygraph
from pygraph.classes.digraph import digraph
from pygraph.algorithms.critical import transitive_edges, critical_path
#demo of the critical path algorithm and the transitivity detection algorithm
G = digraph()
G.add_node('A')
G.add_node('B')
G.add_node('C')
G.add_node('D')
G.add_node('E')
G.add_node('F')
G.add_edge('A','B',1)
G.add_edge('A','C',2)
G.add_edge('B','C',10)
G.add_edge('B','D',2)
G.add_edge('B','E',8)
G.add_edge('C','D',7)
G.add_edge('C','E',3)
G.add_edge('E','D',1)
G.add_edge('D','F',3)
G.add_edge('E','F',1)
#add this edge to add a cycle
#G.add_edge('E','A',1)
print transitive_edges(G)
print critical_path(G)
I know it is kind of stupid just to copy code without understanding it, but I thought I'd first try the example code to see whether the package works, and apparently it doesn't. Now I wonder if it's just a small mistake in the example code or a more fundamental problem.
I peeked at the source code for this and saw that add_edge tries to unpack the first positional argument as a 2-tuple.
If you change these lines:
G.add_edge('A','B',1)
G.add_edge('A','C',2)
...
to:
G.add_edge(('A', 'B'), 1) # note the extra parens
G.add_edge(('A', 'C'), 2)
...
it should work. However, I have not used pygraph before so this may still not produce the desired results.
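A condensed sketch of the demo with that change applied, assuming python-graph-core 1.8.2 as in the traceback:
from pygraph.classes.digraph import digraph
from pygraph.algorithms.critical import critical_path

G = digraph()
G.add_node('A')
G.add_node('B')
G.add_node('C')
# the edge is passed as a 2-tuple, the weight as the second argument
G.add_edge(('A', 'B'), 1)
G.add_edge(('B', 'C'), 10)
G.add_edge(('A', 'C'), 2)

print critical_path(G)  # expected: ['A', 'B', 'C'] for these weights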