I am trying to call an R function from rpy2 that takes multiple input parameters. The function is R's write.csv, and I need to specify more than one of its optional parameters.
If I use it without the optional parameters row.names and column.names, it works like this:
r("write.csv")(d,file='myfilename.csv')
For my requirements, I must issue this command with the optional parameters row.names and column.names. So, I tried:
r('write.csv')(d, file='myfilename.csv', row.names=FALSE, column.names=FALSE)
but I got this error message:
File "/home/UserName/test.py", line 12
r("write.csv")(d,file='myfilename.csv',row.names=FALSE, column.names=FALSE)
SyntaxError: keyword can't be an expression
[Finished in 0.0s with exit code 1]
[shell_cmd: python -u "/home/UserName/test.py"]
[dir: /home/UserName]
[path: /home/UserName/bin:/home/UserName/.local/bin:/usr/local/sbin:/usr/local/bin:
.../usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin]
How can I call write.csv with row.names=FALSE and column.names=FALSE in rpy2?
You can use Python's ** keyword-argument unpacking.
See the note here: http://rpy2.readthedocs.io/en/version_2.8.x/robjects_functions.html#callable
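For instance, here is a minimal sketch of that approach (assuming d is the R data frame from the question): put the R-style argument names, dots included, into a dict and unpack it into the call.
from rpy2.robjects import r

# Keyword names containing a dot can't be typed literally in Python, but they
# can be supplied by unpacking a dict keyed by the R argument names.
kwargs = {'file': 'myfilename.csv', 'row.names': False}
r('write.csv')(d, **kwargs)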
One of my mistakes was that I should have replaced . by _, as shown in the docs here:
from rpy2.robjects.packages import importr
base = importr('base')
base.rank(0, na_last = True)
so I would analogously need row_names = FALSE. However, the . in write.csv() itself still remained, so this only solved part of the question. OK, so I tried a few things to get an answer:
Generating sample data:
from rpy2.robjects import r, globalenv
from rpy2.robjects import IntVector, DataFrame
d = {'a': IntVector((1,2,3)), 'b': IntVector((4,5,6))}
dataf = DataFrame(d)
Attempts follow - 1. did not work, 2. and 3. did work:
1.
r('write_csv')(x=dataf,file='testing.csv',row_names=False)
Traceback (most recent call last):
  File "C:\Users\UserName\FileD\test.py", line 18, in <module>
    r('write_csv')(x=dataf,file='testing.csv',row_names=False)
  File "C:\Python27\lib\site-packages\rpy2\robjects\__init__.py", line 321, in __call__
    res = self.eval(p)
  File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 178, in __call__
    return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
  File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 106, in __call__
    res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error in eval(expr, envir, enclos) : object 'write_csv' not found
Error in eval(expr, envir, enclos) : object 'write_csv' not found
2.
r('''
write_csv <- function(x,verbose=FALSE)
write.csv(x,file='testing.csv',row.names=FALSE)
''')
r['write_csv'](dataf)
3.
globalenv['dataf'] = dataf
r("write.csv(dataf,file='testing2.csv',row.names=FALSE)")
I was really hoping attempt 1. would have worked. It seemed I had reproduced the example in the docs base.rank(0, na_last = True), but I think something might have still been missing.
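In hindsight, attempt 1 most likely failed because r('write_csv') asks R itself to evaluate the name write_csv, and no such object exists in R; the dot-to-underscore translation described in the docs happens on the Python side, not inside R. A variant of attempt 1 that should work, as a sketch, keeps the real R name and uses the ** unpacking mentioned above instead of an R wrapper function:
from rpy2.robjects import r

# Pass the dotted R argument name through a dict instead of a Python keyword.
r('write.csv')(dataf, **{'file': 'testing.csv', 'row.names': False})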
So I installed pyomo, glpk, and ipopt with Anaconda. When I run the example code here: https://pyomo.readthedocs.io/en/stable/contributed_packages/mindtpy.html
from pyomo.environ import *
model = ConcreteModel()
model.x = Var(bounds=(1.0,10.0),initialize=5.0)
model.y = Var(within=Binary)
model.c1 = Constraint(expr=(model.x-3.0)**2 <= 50.0*(1-model.y))
model.c2 = Constraint(expr=model.x*log(model.x)+5.0 <= 50.0*(model.y))
model.objective = Objective(expr=model.x, sense=minimize)
SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
model.objective.display()
model.display()
model.pprint()
I get output indicating that the binary variable has apparently been given an infeasible (fractional) value:
python minlpex.py
INFO: ---Starting MindtPy---
INFO: Original model has 2 constraints (2 nonlinear) and 0 disjunctions, with
2 variables, of which 1 are binary, 0 are integer, and 1 are continuous.
INFO: NLP 1: Solve relaxed integrality
INFO: NLP 1: OBJ: 1.0 LB: 1.0 UB: inf
INFO: ---MindtPy Master Iteration 0---
INFO: MIP 1: Solve master problem.
WARNING: Empty constraint block written in LP format - solver may error
Traceback (most recent call last):
  File "minlpex.py", line 13, in <module>
    op.SolverFactory('mindtpy').solve(model, mip_solver='glpk', nlp_solver='ipopt',tee=True)
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/MindtPy.py", line 370, in solve
    MindtPy_iteration_loop(solve_data, config)
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/iterate.py", line 30, in MindtPy_iteration_loop
    handle_master_mip_optimal(master_mip, solve_data, config)
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/mindtpy/mip_solve.py", line 62, in handle_master_mip_optimal
    config)
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/contrib/gdpopt/util.py", line 199, in copy_var_list_values
    v_to.set_value(value(v_from, exception=False))
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 173, in set_value
    if valid or self._valid_value(val):
  File "/anaconda3/envs/py36/lib/python3.6/site-packages/pyomo/core/base/var.py", line 185, in _valid_value
    "domain %s" % (val, type(val), self.domain))
ValueError: Numeric value `0.22709088987977885` (<class 'float'>) is not in domain Binary
I was a little confused: since this is the provided example code, I would not expect it to error like this. Am I messing something up, or am I missing some required library?
Thanks a lot.
It looks like something was wrong with the conda installs of pyomo or ipopt.
After I reinstalled ipopt with pip and built pyomo from the GitHub source, everything worked fine.
This works:
ss = 'insert into images (file_path) values(?);'
dddd = (('dd1',), ('dd2',))
conn.executemany(ss, dddd)
However this does not:
s = 'insert into images (file_path) values (:v)'
ddddd = ({':v': 'dd11'}, {':v': 'dd22'})
conn.executemany(s, ddddd)
Traceback (most recent call last):
  File "/Users/Wes/.virtualenvs/ppyy/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3035, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-31-a999de59f73b>", line 1, in <module>
    conn.executemany(s, ddddd)
ProgrammingError: You did not supply a value for binding 1.
I am wondering if it is possible to use named parameters with executemany and, if so, how.
The documentation at section 11.13.3 talks generally about parameters but doesn't discuss the two styles of parameters that are described for other flavors of .executexxx().
I have checked out Python sqlite3 execute with both named and qmark parameters which does not pertain to executemany.
The source shows that execute() simply constructs a one-element list and calls executemany(), so the problem is not with executemany() itself; the same call fails with execute():
>>> conn.execute('SELECT :v', {':v': 42})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
sqlite3.ProgrammingError: You did not supply a value for binding 1.
As shown in the Python documentation, named parameters do not include the colon:
# And this is the named style:
cur.execute("select * from people where name_last=:who and age=:age", {"who": who, "age": age})
So you have to use ddddd = ({'v': 'dd11'}, {'v': 'dd22'}).
The : isn't part of the parameter name.
>>> s = 'insert into images (file_path) values (:v)'
>>> ddddd = ({'v': 'dd11'}, {'v': 'dd22'})
>>> conn.executemany(s, ddddd)
<sqlite3.Cursor object at 0x0000000002C0E500>
>>> conn.execute('select * from images').fetchall()
[(u'dd11',), (u'dd22',)]
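For completeness, here is a minimal self-contained sketch of the working named-parameter call; the in-memory database and the images table are assumptions made only so the snippet runs on its own:
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table images (file_path text)')

# The colon belongs to the SQL placeholder, not to the dictionary keys.
sql = 'insert into images (file_path) values (:v)'
rows = ({'v': 'dd11'}, {'v': 'dd22'})
conn.executemany(sql, rows)

print(conn.execute('select * from images').fetchall())
# [('dd11',), ('dd22',)]  (u'dd11' etc. under Python 2)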
I have three variables I want to write to a tab-delimited .csv, appending values each time the script iterates over a key from the dictionary.
Currently the script calls a command, captures the stdout as out, then assigns the three named regex groups to individual variables (first, second, and rest) for writing to the .csv. I get an __exit__ error when I run the script below.
Note: I've read up on csv.writer and I'm still confused as to whether I can actually write multiple variables to one row.
Thanks for any help you can provide.
import csv, re, subprocess

for k in myDict:
    run_command = "".join(["./aCommand", " -r data -p ", str(k)])
    process = subprocess.Popen(run_command,
                               shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    out, err = process.communicate()
    errcode = process.returncode
    pattern = re.compile('lastwrite|(\d{2}:\d{2}:\d{2})|alert|trust|Value')
    grouping = re.compile('(?P<first>.+?)(\n)(?P<second>.+?)([\n]{2})(?P<rest>.+[\n])',
                          re.MULTILINE | re.DOTALL)
    if pattern.findall(out):
        match = re.search(grouping, out)
        first = match.group('first')
        second = match.group('second')
        rest = match.group('rest')
        with csv.writer(open(FILE, 'a')) as f:
            writer = csv.writer(f, delimiter='\t')
            writer.writerow(first, second, rest)
Edit: It was requested in the comments that I post the entire traceback. Note that the line number in the traceback will not match the code above, as this is not the entire script.
Traceback (most recent call last):
  File "/mydir/pyrr.py", line 60, in <module>
    run_rip()
  File "/mydir/pyrr.py", line 55, in run_rip
    with csv.writer(open('/mydir/ntuser.csv', 'a')) as f:
AttributeError: __exit__
Answer: Using the comment below, I was able to write it as follows.
f = csv.writer(open('/mydir/ntuser.csv', 'a'),
               dialect=csv.excel,
               delimiter='\t')
f.writerow((first, second, rest))
The error is pretty clear. The with statement takes a context manager, i.e., an object with an __enter__ and an __exit__ method, such as the object returned by open. csv.writer does not provide such an object. You are also attempting to create the writer twice:
with open(FILE, 'a') as f:
    writer = csv.writer(f, delimiter='\t')
    writer.writerow([first, second, rest])  # writerow expects a single sequence
The with ... f: is like a try...except...finally that guarantees f is closed no matter what happens, except you don't have to type it out. open(...) returns an object with __enter__ and __exit__ properly defined, and __exit__ is called in that finally block you don't have to type; that missing method is what your exception was complaining about. csv.writer does not return such an object, so you can't use it in the with statement itself. You have to create the writer inside the with block, as shown above.
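To make that concrete, here is roughly what the with statement does for you behind the scenes (a sketch; FILE, first, second, and rest are the names from the question):
f = open(FILE, 'a')
try:
    writer = csv.writer(f, delimiter='\t')
    writer.writerow([first, second, rest])
finally:
    f.close()  # runs whether or not the try block raised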
I have an app running online under web2py. Now I am adding a names.yml file which I need to load in my controller file (default.py) on the web2py server. Where should I keep the .yml/.yaml files? Currently I have kept them in views, as default/names.yml, but when I call it in default.py like:
dicttagger = DictionaryTagger([ 'default/names.yml', 'default/surname.yml'])
I get a "no such file" error.
I also tried the following:
dicttagger = DictionaryTagger([ 'views/default/names.yml', 'views/default/surname.yml'])
Same error.
A snapshot of the class:
class DictionaryTagger(object):
    def __init__(self, dictionary_paths):
        files = [open(path, 'r') for path in dictionary_paths]
        dictionaries = [yaml.load(dict_file) for dict_file in files]
        map(lambda x: x.close(), files)
Any suggestions on how to do this? Or am I making a mistake in using a yaml/yml file with web2py, and it just doesn't work in a web2py app hosted online?
Question 2
Thank you, that resolved the error, but I am not sure how to add nltk.download() to my hosted app. I keep getting the error below. Can you please have a look:
Traceback (most recent call last):
  File "/home/prakashsukhwal/web2py/gluon/restricted.py", line 220, in restricted
    exec ccode in environment
  File "/home/prakashsukhwal/web2py/applications/Sensiva/controllers/default.py", line 4, in
    nltk.download()
  File "/usr/local/lib/python2.7/dist-packages/nltk/downloader.py", line 644, in download
    self._interactive_download()
  File "/usr/local/lib/python2.7/dist-packages/nltk/downloader.py", line 958, in _interactive_download
    DownloaderShell(self).run()
  File "/usr/local/lib/python2.7/dist-packages/nltk/downloader.py", line 981, in run
    user_input = raw_input('Downloader> ').strip()
EOFError: EOF when reading a line
You can store the files wherever you want, but if you're using the Python open function, you'll need to give it full paths, not paths relative to the web2py application folder. Instead, try:
import os
dicttagger = DictionaryTagger([os.path.join(request.folder, 'views',
                                            'default', 'names.yml'),
                               ...])
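If you would rather not keep data files under views, one common convention (an assumption on my part, not something required by the answer above) is the application's private folder, still located via request.folder:
import os

# Hypothetical layout: the .yml files live in applications/<app>/private/
dicttagger = DictionaryTagger([
    os.path.join(request.folder, 'private', 'names.yml'),
    os.path.join(request.folder, 'private', 'surname.yml'),
])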
I am using the sklearn 0.14 module in Python to create a decision tree. I was hoping to use the OneHotEncoder to convert some features into categorical features. According to the documentation, I should be able to provide an array of indices to indicate which features should be converted. However, trying the following code:
import numpy
from sklearn import preprocessing

xs = [[64, 15230], [3, 67673], [16, 43678]]
encoder = preprocessing.OneHotEncoder(n_values='auto', categorical_features=[1], dtype=numpy.integer)
encoder.fit(xs)
I receive the following error:
Traceback (most recent call last):
  File "C:\Users\sara\Documents\Shipping Project\PythonSandbox\CarrierDecisionTree.py", line 35, in <module>
    encoder.fit(xs)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 892, in fit
    self.fit_transform(X)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 944, in fit_transform
    self.categorical_features, copy=True)
  File "C:\Python27\lib\site-packages\sklearn\preprocessing\data.py", line 795, in _transform_selected
    return sparse.hstack((X_sel, X_not_sel))
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 417, in hstack
    return bmat([blocks], format=format, dtype=dtype)
  File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 532, in bmat
    dtype = upcast( *tuple([A.dtype for A in blocks[block_mask]]) )
  File "C:\Python27\lib\site-packages\scipy\sparse\sputils.py", line 53, in upcast
    raise TypeError('no supported conversion for types: %r' % (args,))
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
If instead I provide the array [0, 1] to categorical_features, it works correctly and converts both features properly. The same correct behavior occurs when passing 'all' to categorical_features. However, I only want the second feature converted, not the first. I understand I could do this manually by converting one feature at a time, but I was hoping to use all the beauty of OneHotEncoder, as I will be using many more features later on.
Posting as an answer, for the record:
TypeError: no supported conversion for types: (dtype('int32'), dtype('S6'))
means something in the true xs (not the one shown in the code snippet) is a string: dtype('S6') is NumPy's length-six string type.
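A quick way to confirm that diagnosis (a sketch; the xs below is hypothetical data in which the second column was accidentally read in as strings, which is exactly what produces an 'S6'-style dtype):
import numpy as np

xs = [[64, '15230'], [3, '67673'], [16, '43678']]
arr = np.asarray(xs)
print(arr.dtype)   # a string dtype such as 'S5' / '<U5', because one column is text

# Cast to integers before handing the data to OneHotEncoder:
arr = arr.astype(np.int64)
print(arr.dtype)   # int64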