I am trying to read tweets from a text file at a URL:
http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/assignment5.txt
Tweets in the file are listed on a single line (there are no line breaks) and separated by the string "EndOfTweet".
I am reading the file using the following code:
import urllib2
wfd = urllib2.urlopen('http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/assignment5.txt')
data = wfd.read()
I understand that I have to split on "EndOfTweet" in order to separate the tweets, but since there is only one line, I do not understand how to loop through the file and separate each one.
for line in data:
line = data.split('EndOfTweet')
You're so close!
By the time you've called wfd.read(), data will contain the raw text of that file. The normal way to loop over a file is to write something like for line in data, which looks for newlines to split the data on. In this case, your data doesn't contain the usual newline terminators; instead, the text EndOfTweet is used to separate your lines. Here's what you should have done:
import urllib2
import json
wfd = urllib2.urlopen('http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/assignment5.txt')
data = wfd.read()
for line in data.split('EndOfTweet'):
    # From here, line will contain a single tweet. It appears each tweet is a JSON-parsable structure.
    decoded_line = json.loads(line)
    # Now, let's print out the text of the tweet to show we can
    print decoded_line.get(u'text')
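If you happen to be on Python 3, where urllib2 was split into urllib.request, a roughly equivalent sketch would be:
import json
import urllib.request

wfd = urllib.request.urlopen('http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/assignment5.txt')
data = wfd.read().decode('utf-8')  # urlopen returns bytes in Python 3

for line in data.split('EndOfTweet'):
    if not line.strip():  # skip an empty trailing chunk, if any
        continue
    decoded_line = json.loads(line)
    print(decoded_line.get('text'))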
I have created a script which will give you the matching rows between two files. After that, I pass the output file to a function, which uses that file as input to create a pivot table using pandas.
But something seems to be wrong; below is the code snippet:
def CreateSummary(file):
    out_file = file
    file_df = pd.read_csv(out_file)  ## This function is appending NULL bytes at the end of the file
    #print file_df.head(2)
The above code gives me the error:
ValueError: No columns to parse from file
Tried another approach:
file_df = pd.read_csv(out_file,delim_whitespace=True,engine='python')
## This gives me the error:
_csv.Error: line contains NULL byte
Any suggestions and criticism are highly appreciated.
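For what it's worth, a common workaround for NUL-byte errors like this (a minimal Python 3 sketch, assuming the stray \x00 bytes can simply be dropped; "out_file.csv" is a hypothetical stand-in for the real file name) is to strip them before handing the text to pandas:
import pandas as pd
from io import StringIO

# "out_file.csv" is a hypothetical name standing in for the real output file.
with open("out_file.csv") as f:
    cleaned = f.read().replace("\x00", "")  # drop the NULL bytes

file_df = pd.read_csv(StringIO(cleaned))
print(file_df.head(2))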
I am trying to build a tool that can convert .csv files into .yaml files for further use. I found a handy bit of code that does the job nicely from the link below:
Convert CSV to YAML, with Unicode?
which states that this line will take the dict created from a .csv file and dump it to a .yaml file:
out_file.write(ry.safe_dump(dict_example,allow_unicode=True))
However, one small kink I have noticed is that when it is run once, the generated .yaml file is typically incomplete by a line or two. In order to have the .csv file exhaustively read through to create a complete .yaml file, the code must be run two or even three times. Does anybody know why this could be?
UPDATE
Per request, here is the code I use to parse my .csv file, which is two columns long (with a string in the first column and a list of two strings in the second column) and will typically be 50 rows long (or maybe more). Also note that it is designed to remove any '\n' or spaces that could potentially cause problems later on in the code.
import csv

csv_contents = {}
with open("example1.csv", "rU") as csvfile:
    green = csv.reader(csvfile, dialect='excel')
    for line in green:
        candidate_number = line[0]
        first_sequence = line[1].replace(' ', '').replace('\r', '').replace('\n', '')
        second_sequence = line[2].replace(' ', '').replace('\r', '').replace('\n', '')
        csv_contents[candidate_number] = [first_sequence, second_sequence]
csv_contents.pop('Header name', None)
Ultimately, it is not that important that I maintain the order of the rows from the original dict, just that all the information within the rows is properly structured.
I am not sure what the cause could be, but you might be running out of memory, as you create the YAML document in memory first and then write it out. It is much better to stream it out directly.
You should also note that the code in the question you link to doesn't preserve the order of the original columns, something easily circumvented by using round_trip_dump instead of safe_dump.
You probably want to make a top-level sequence (list) as in the desired output of the linked question, with each element being a mapping (dict).
The following parses the CSV, taking the first line as keys for mappings created for each following line:
import sys
import csv
import ruamel.yaml as ry
import dateutil.parser  # pip install python-dateutil

def process_line(line):
    """convert line elements, trying int, float, date"""
    ret_val = []
    for elem in line:
        try:
            res = int(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        try:
            res = float(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        try:
            res = dateutil.parser.parse(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        ret_val.append(elem.strip())
    return ret_val

csv_file_name = 'xyz.csv'
data = []
header = None
with open(csv_file_name) as inf:
    for line in csv.reader(inf):
        d = process_line(line)
        if header is None:
            header = d
            continue
        data.append(ry.comments.CommentedMap(zip(header, d)))
ry.round_trip_dump(data, sys.stdout, allow_unicode=True)
with input xyz.csv:
id, title_english, title_russian
1, A Title in English, Название на русском
2, Another Title, Другой Название
this generates:
- id: 1
  title_english: A Title in English
  title_russian: Название на русском
- id: 2
  title_english: Another Title
  title_russian: Другой Название
process_line is just some sugar that tries to convert the strings in the CSV file to more useful types and to strings without leading spaces (resulting in far fewer quotes in your output YAML file).
I have tested the above on files with 1000 rows, without any problems (I won't post the output though).
The above was done using Python 3 as well as Python 2.7, starting with a UTF-8 encoded file xyz.csv. If you are using Python 2, you can try unicodecsv if you need to handle Unicode input and things don't work out as well as they did for me.
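A minimal sketch of that Python 2 fallback, assuming the unicodecsv package is installed (pip install unicodecsv) and that xyz.csv is UTF-8 encoded:
import unicodecsv

with open('xyz.csv', 'rb') as inf:
    for row in unicodecsv.reader(inf, encoding='utf-8'):
        print row  # each element is already a unicode string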
Good day, fellow developers. I was wondering how to append everything in a list to a text file so that it looks like this:
list = ['something','foo','foooo','bar','bur','baar']
#the list
THE NORMAL FILE
this
is
the
text
file
:D
AND WHAT I WOULD LIKE TO DO
this something
is foo
the foooo
text bar
file bur
:D baar
This can be accomplished by reading the original file's contents and appending the added words to each line. Example:
# changed the name to list_obj to prevent overriding builtin 'list'
list_obj = ['something', 'foo', 'foooo', 'bar', 'bur', 'baar']
path_to_file = "a path name.txt"

# r+ to read and write to/from the file
with open(path_to_file, "r+") as fileobj:
    # read all lines and only include lines that have something written
    lines = [x for x in fileobj.readlines() if x.strip()]
    # after reading, reset the file position
    fileobj.seek(0)
    # iterate over the lines and words to add
    for line, word in zip(lines, list_obj):
        # create each new line with the added word
        new_line = "%s %s\n" % (line.rstrip(), word)
        # write the line to the file
        fileobj.write(new_line)
    # drop any leftover old content beyond what was rewritten
    fileobj.truncate()
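If rewriting the file in place feels fragile, a variant of the same idea (a sketch, assuming a separate output file is acceptable; "output.txt" is a hypothetical name) writes the combined lines to a new file instead:
list_obj = ['something', 'foo', 'foooo', 'bar', 'bur', 'baar']

with open("a path name.txt") as src, open("output.txt", "w") as dst:
    lines = [x for x in src if x.strip()]  # keep only non-empty lines
    for line, word in zip(lines, list_obj):
        dst.write("%s %s\n" % (line.rstrip(), word))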
The file looks like a series of lines with IDs:
aaaa
aass
asdd
adfg
aaaa
I'd like to get, in a new file, each ID and the number of its occurrences in the old file, in this form:
aaaa 2
asdd 1
aass 1
adfg 1
With the two elements separated by a tab.
The code I have prints what I want but doesn't write it to a new file:
with open("Only1ID.txt", "r") as file:
file = [item.lower().replace("\n", "") for item in file.readlines()]
for item in sorted(set(file)):
print item.title(), file.count(item)
As you use Python 2, the simplest approach to convert your console output to file output is by using the print chevron (>>) syntax which redirects the output to any file-like object:
with open("filename", "w") as f: # open a file in write mode
    print >> f, "some data" # print 'into the file'
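(For reference: Python 3 dropped the chevron, and the same redirection is spelled with the file keyword argument there.)
# Python 3 equivalent of the chevron redirection above
with open("filename", "w") as f:
    print("some data", file=f)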
Your code could look like this after simply adding another open to open the output file and adding the chevron to your print statement:
with open("Only1ID.txt", "r") as file, open("output.txt", "w") as out_file:
file = [item.lower().replace("\n", "") for item in file.readlines()]
for item in sorted(set(file)):
print >> out_file item.title(), file.count(item)
However, your code has a few other issues which one should avoid or could improve:
Do not use the same variable name file for both the file object returned by open and your processed list of strings. This is confusing; just use two different names.
You can directly iterate over the file object, which works like a generator that returns the file's lines as strings. Generators produce the next element just in time; that means the whole file is not first loaded into memory, as file.readlines() does, but only one line is read and stored at a time, whenever the next line is needed. That way you improve the code's performance and resource efficiency.
If you write a list comprehension but don't necessarily need its result as a list, because you simply want to iterate over it in a for loop, it is more efficient to use a generator expression (same effect as the file object's line generator described above). The only syntactical difference between a list comprehension and a generator expression is the brackets: replace [...] with (...) and you have a generator. The only downside of a generator is that you can neither find out its length nor access items directly using an index. As you don't need any of these features, the generator is fine here.
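A tiny sketch of that difference:
numbers = ["1", "2", "3"]
as_list = [int(x) for x in numbers]  # list comprehension: built eagerly, supports len() and indexing
as_gen = (int(x) for x in numbers)   # generator expression: yields one item at a time
print sum(as_gen)                    # can be consumed exactly once; prints 6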
There is a simpler way to remove trailing newline characters from a line: line.rstrip() removes all trailing whitespace. If you want to keep e.g. spaces and only want the newline to be removed, pass that character as an argument: line.rstrip("\n").
However, it could possibly be even easier and faster to just not add another implicit line break during the print call instead of removing it first to have it re-added later. You would suppress the line break of print in Python 2 by simply adding a comma at the end of the statement:
print >> out_file, item.title(), file.count(item),
There is a type Counter to count occurrences of elements in a collection, which is faster and easier than writing it yourself, because you don't need the additional count() call for every element. The Counter behaves mostly like a dictionary with your items as keys and their count as values. Simply import it from the collections module and use it like this:
from collections import Counter

c = Counter(lines)
for item in c:
    print item, c[item]
With all those suggestions (except the one about not removing the line breaks) applied and the variables renamed to something clearer, the optimized code looks like this:
from collections import Counter

with open("Only1ID.txt") as in_file, open("output.txt", "w") as out_file:
    counter = Counter(line.lower().rstrip("\n") for line in in_file)
    for item in sorted(counter):
        print >> out_file, item.title(), counter[item]
Okay so if my file looks like this:
"1111-11-11";1;99.9;11;11.1;11.1
"2222-22-22";2;88.8;22;22.2;22.2
"3333-33-33";3;77.7;3.3;33.3;33.3
How can I read only the parts "99.9", "88.8" and "77.7" from that file and make a list [99.9, 88.8, 77.7]? Basically, I want to find the parts after n semicolons.
You can open the file and read each line with the open command. For CSV, your code might look like this:
import csv
with open('filename.csv', 'rb') as f:
    reader = csv.reader(f)
    listOfRows = list(reader)
You will now have a list of lines, and each line requires some processing. If your lines always have the same structure, you can split them on a ";":
list_in_line = line.split(";")
and get the third element in that line.
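Putting those pieces together, a minimal sketch (assuming the third semicolon-separated field is always the one you want and filename.csv stands in for your file) could look like this:
import csv

values = []
with open('filename.csv', 'rb') as f:
    for row in csv.reader(f, delimiter=';'):
        values.append(float(row[2]))  # third field: 99.9, 88.8, 77.7
print values  # [99.9, 88.8, 77.7]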
Please show us some of your work, or better, explain the structure of your data.