I'm getting a simple but confusing error when trying to create a login worker
in Python.
Here's the error I'm getting:
Traceback (most recent call last):
File "stratixlogin.py", line 87, in <module>
main()
File "stratixlogin.py", line 78, in main
login_worker()
File "stratixlogin.py", line 51, in login_worker
data = f.read()
ValueError: Mixing iteration and read methods would lose data
Here is where the error is occurring:
with open("global_users.txt", "r") as f:
for line in f:
data = f.read()
if data == username_ask:
print(G+"Success!")
password_ask = raw_input(O+"Password:"+W+" ")
with open("global_passwords.txt", "r") as f:
for line in f:
data = f.read()
if data == password_ask:
print(G+"Success!")
else:
print(R+"Incorrect Password!")
else:
print(R+"No Users Found!")
I am not sure what the error means here, and I am confused about how to fix it. Any ideas?
You can't mix iterating over the lines of a file (the for loop) and read(); the same mixing happens again in the inner password loop. Note also that each line yielded by iteration keeps its trailing newline, so strip it before comparing. This is enough:
with open("global_users.txt", "r") as f:
for data in f:
if data == username_ask:
print(G+"Success!")
password_ask = raw_input(O+"Password:"+W+" ")
with open("global_passwords.txt", "r") as f:
for line in f:
data = f.read()
if data == password_ask:
print(G+"Success!")
else:
print(R+"Incorrect Password!")
else:
print(R+"No Users Found!")
I am attempting to visualise a KDE plot in Seaborn, but am encountering an error when passing in the data.
The data is a set of scores ranging from 1 to 13, in the form of a numpy array.
Below is the section of code I'm using.
query_CNM = 'SELECT SCORE from CNMATCH LIMIT 2000'
df = pd.read_sql(query_CNM, conn, index_col = None)
yy = np.array(df)
plot = sns.kdeplot(yy)
Below is the full error that I'm receiving.
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1758, in <module>
main()
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1752, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1147, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Users/uni/Desktop/Proof_Of_Concept/PYQTKDE.py", line 66, in <module>
plot = sns.kdeplot(yy)
File "/Users/uni/.conda/envs/fing.py/lib/python2.7/site-packages/seaborn/distributions.py", line 664, in kdeplot
x, y = data.T
ValueError: need more than 1 value to unpack
I can't seem to find exactly how the data needs to be formatted for seaborn to fit a KDE; any insights would be greatly appreciated.
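For what it's worth: in this version of seaborn, kdeplot treats a 2-D array as paired (x, y) data, and np.array(df) has shape (n, 1), so the x, y = data.T in the traceback has only one row to unpack. Passing a 1-D vector fits a univariate KDE instead. A minimal sketch based on the question's variables:

import numpy as np
import seaborn as sns

# yy has shape (n, 1); flatten it to a 1-D vector of scores
plot = sns.kdeplot(yy.ravel())

# equivalently, pass the column itself rather than the whole frame
# plot = sns.kdeplot(df['SCORE'])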
I see the following error when executing this Python code. What is the issue here?
I have used "sys.stdout.close()" but I still see this error.
#! /usr/bin/python
import sys
a = [ 10, 12, 13, 14]
sys.stdout=open("file.txt","w")
print("++++++++")
print("***xyz***")
print("++++++++")
sys.stdout.close()
for i in a:
    print i
Output:
Traceback (most recent call last):
File "./test3.py", line 10, in <module>
print i
ValueError: I/O operation on closed file
You are trying to write to stdout (your file) after closing it: at line 8 you close the file, and at line 10 you call print.
If you want to write the list a to the file, you should close it after the for loop.
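A minimal fix of the script above, keeping your redirection approach (only the close() moves):

#! /usr/bin/python
import sys
a = [ 10, 12, 13, 14]
sys.stdout=open("file.txt","w")
print("++++++++")
print("***xyz***")
print("++++++++")
for i in a:
    print i          # still redirected to file.txt
sys.stdout.close()   # close only after the last write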
Consider using with open(...) so you don't have to worry about closing the file at all. And if your data needs to stay a list, consider pickling it instead of writing its string form to a file; pickling serializes your data.
#!python3
# import modules
from os import system
import pickle

# clear the screen
system('cls')

a = [ 10, 12, 13, 14]

# write a list to file, but it has to be written as a string
with open('file.txt', 'w') as wf:
    wf.write(str(a))

# when you open your file up, the data is a string
with open('file.txt', 'r') as fp:
    for item in fp:
        print(item)
        print(type(item))

# if you want to retain your data as a list, then pickle it
output = open('file.pkl', 'wb')
pickle.dump(a, output)
output.close()

# open up a pickled file
pkl_file = open('file.pkl', 'rb')
data = pickle.load(pkl_file)
print(data)
print(type(data))
pkl_file.close()
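The pickle half can use with blocks as well; this is equivalent but closes the files for you (a sketch):

import pickle

a = [ 10, 12, 13, 14]

# files are closed automatically when the with block exits
with open('file.pkl', 'wb') as output:
    pickle.dump(a, output)

with open('file.pkl', 'rb') as pkl_file:
    data = pickle.load(pkl_file)

print(data)
print(type(data))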
I have begun to experiment with Python and NLTK.
I am getting a lengthy error message which I cannot find a solution to, and I would appreciate any insights you may have.
import nltk,csv,numpy
from nltk import sent_tokenize, word_tokenize, pos_tag
reader = csv.reader(open('Medium_Edited.csv', 'rU'), delimiter= ",",quotechar='|')
tokenData = nltk.word_tokenize(reader)
I'm running Python 2.7 and the latest nltk package on OSX Yosemite.
Here are two lines of code I also attempted, with no difference in the results:
with open("Medium_Edited.csv", "rU") as csvfile:
tokenData = nltk.word_tokenize(reader)
These are the error messages I see:
Traceback (most recent call last):
File "nltk_text.py", line 11, in <module>
tokenData = nltk.word_tokenize(reader)
File "/Library/Python/2.7/site-packages/nltk/tokenize/__init__.py", line 101, in word_tokenize
return [token for sent in sent_tokenize(text, language)
File "/Library/Python/2.7/site-packages/nltk/tokenize/__init__.py", line 86, in sent_tokenize
return tokenizer.tokenize(text)
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 1226, in tokenize
return list(self.sentences_from_text(text, realign_boundaries))
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 1274, in sentences_from_text
return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 1265, in span_tokenize
return [(sl.start, sl.stop) for sl in slices]
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 1304, in _realign_boundaries
for sl1, sl2 in _pair_iter(slices):
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 310, in _pair_iter
prev = next(it)
File "/Library/Python/2.7/site-packages/nltk/tokenize/punkt.py", line 1278, in _slices_from_text
for match in self._lang_vars.period_context_re().finditer(text):
TypeError: expected string or buffer
Thanks in advance
As you can read in the Python csv documentation, csv.reader "returns a reader object which will iterate over lines in the given csvfile". In other words, if you want to tokenize the text in your csv file, you will have to go through the lines and the fields in those lines:
for line in reader:
    for field in line:
        tokens = word_tokenize(field)
Also, since you import word_tokenize at the beginning of your script, you should call it as word_tokenize, not nltk.word_tokenize. That also means you can drop the import nltk statement.
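Putting it together, a minimal sketch using the question's file name and reader settings (collecting all tokens into one flat list is an assumption about what you want):

import csv
from nltk import word_tokenize

tokens = []
with open('Medium_Edited.csv', 'rU') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='|')
    for line in reader:
        for field in line:
            # tokenize each cell's text and accumulate the tokens
            tokens.extend(word_tokenize(field))
print(tokens)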
It is giving the error expected string or buffer because you have forgotten to convert the reader to a string with str:
tokenData = nltk.word_tokenize(str(reader))
I am working on a machine learning project, and here is my code:
import csv
import numpy as np
import string
from sklearn.ensemble import RandomForestRegressor

def main():
    alchemy_category_set = {}

    # read train data
    train = []
    target = []
    with open("/media/halawa/93B77F681EC1B4D2/GUC/Semster 8/CSEN 1022 Machine Learning/2/train.csv", 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        reader.next() # skip the header
        for row in reader:
            line = row[3:len(row)-1]
            train.append(line)
            target.append(row[len(row)-1])
            if row[3] not in alchemy_category_set:
                alchemy_category_set[row[3]] = len(alchemy_category_set)

    # read valid data
    valid = []
    valid_index = []
    with open("/media/halawa/93B77F681EC1B4D2/GUC/Semster 8/CSEN 1022 Machine Learning/2/test.csv", 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        reader.next() # skip the header
        for row in reader:
            line = row[3:len(row)]
            valid.append(line)
            valid_index.append(row[1])
            if row[3] not in alchemy_category_set:
                alchemy_category_set[row[3]] = len(alchemy_category_set)

if __name__=="__main__":
    main()
The reading of test.csv is not working, although it works with train.csv. When I run it, I get:
/usr/bin/python2.7 /home/halawa/PycharmProjects/ML/train.py
Traceback (most recent call last):
File "/home/halawa/PycharmProjects/ML/train.py", line 68, in <module>
main()
File "/home/halawa/PycharmProjects/ML/train.py", line 26, in main
reader.next() #skip the header
StopIteration
Process finished with exit code 1
The problem is with reading the CSV file; any help would be appreciated.
I think you just forgot the indentation after opening the test file: after the with open line, the next 8 lines (each of these lines) should be indented by two more spaces.
By the way, it is highly recommended to indent with 4 spaces, not just 2, and to keep the indentation consistent throughout your file.
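One more note (an aside, not part of the answer above): reader.next() raises StopIteration when the file has no lines left, so if the indentation fix alone doesn't help, it is worth checking that test.csv actually has content. The built-in next() with a default never raises; a small sketch:

import csv

with open("test.csv", 'rb') as csvfile:  # full path shortened for the sketch
    reader = csv.reader(csvfile, delimiter=',')
    header = next(reader, None)  # returns None instead of raising StopIteration
    if header is None:
        print("test.csv is empty")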
I'm currently creating a spreadsheet using xlwt and trying to export it out as an HttpResponse in django for a user to download. My code looks like this:
response = HttpResponse(mimetype = "application/vnd.ms-excel")
response['Content-Disposition'] = 'attachment; filename = %s +".xls"' % u'Zinnia_Entries'
work_book.save(response)
return response
Which seems to be the right way to do it, but I'm getting a:
Traceback (most recent call last):
File "C:\dev\workspace-warranty\imcom\imcom\wsgiserver.py", line 1233, in communicate
req.respond()
File "C:\dev\workspace-warranty\imcom\imcom\wsgiserver.py", line 745, in respond
self.server.gateway(self).respond()
File "C:\dev\workspace-warranty\imcom\imcom\wsgiserver.py", line 1927, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "C:\dev\workspace-warranty\3rdparty\django\core\servers\basehttp.py", line 674, in __call__
return self.application(environ, start_response)
File "C:\dev\workspace-warranty\3rdparty\django\core\handlers\wsgi.py", line 252, in __call__
response = middleware_method(request, response)
File "C:\dev\workspace-warranty\imcom\imcom\seo_mod\middleware.py", line 33, in process_response
response.content = strip_spaces_between_tags(response.content.strip())
File "C:\dev\workspace-warranty\3rdparty\django\utils\functional.py", line 259, in wrapper
return func(*args, **kwargs)
File "C:\dev\workspace-warranty\3rdparty\django\utils\html.py", line 89, in strip_spaces_between_tags
return re.sub(r'>\s+<', '><', force_unicode(value))
File "C:\dev\workspace-warranty\3rdparty\django\utils\encoding.py", line 88, in force_unicode
raise DjangoUnicodeDecodeError(s, *e.args)
DjangoUnicodeDecodeError: 'utf8' codec can't decode byte 0xd0 in position 0: invalid continuation byte. You passed in
(I left off the rest because I get a really long line of this \xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1\x00 kind of stuff)
Do you guys have any ideas on what could be wrong with this? Is it because some of my write values look like this:
work_sheet.write(r,#,information) where information isn't cast to unicode?
response['Content-Disposition'] = 'attachment; filename = %s +".xls"' % u'Zinnia_Entries'
should just be
response['Content-Disposition'] = 'attachment; filename = %s.xls' % u'Zinnia_Entries'
without the quotes around .xls; otherwise the output will be
u'attachment; filename = Zinnia_Entries +".xls"'
So try changing that.
But also check out this answer. It has a really helpful little function for outputting xls files.
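For reference, here's how those pieces fit together in a minimal view (a sketch; work_book and the mimetype argument follow the question's code, which targets an older Django; newer versions take content_type instead, and the view name is hypothetical):

import xlwt
from django.http import HttpResponse

def export_entries(request):
    work_book = xlwt.Workbook()
    sheet = work_book.add_sheet('Entries')
    sheet.write(0, 0, u'example value')  # write unicode values into cells

    # save the workbook straight into the response body
    response = HttpResponse(mimetype="application/vnd.ms-excel")
    response['Content-Disposition'] = 'attachment; filename=%s.xls' % u'Zinnia_Entries'
    work_book.save(response)
    return response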
Solved the problem. Apparently someone had put in some funky middleware that was hacking the file apart, appending and adding to it, etc., when it shouldn't have.
Anyway, with it gone, the file exports perfectly.
#Storm - Thank you for all the help!