Save multiple lines of text in .txt - python-2.7

I am a python newbie. I can print the twitter search results, but when I save to .txt, I only get one result. How do I add all the results to my .txt file?
t = Twython(app_key=api_key, app_secret=api_secret, oauth_token=acces_token, oauth_token_secret=ak_secret)

tweets = []
MAX_ATTEMPTS = 10
COUNT_OF_TWEETS_TO_BE_FETCHED = 500

for i in range(0,MAX_ATTEMPTS):
    if(COUNT_OF_TWEETS_TO_BE_FETCHED < len(tweets)):
        break
    if(0 == i):
        results = t.search(q="#twitter",count='100')
    else:
        results = t.search(q="#twitter",include_entities='true',max_id=next_max_id)
    for result in results['statuses']:
        tweet_text = result['user']['screen_name'], result['user']['followers_count'], result['text'], result['created_at'], result['source']
        tweets.append(tweet_text)
        print tweet_text
        text_file = open("Output.txt", "w")
        text_file.write("#%s,%s,%s,%s,%s" % (result['user']['screen_name'], result['user']['followers_count'], result['text'], result['created_at'], result['source']))
        text_file.close()

You just need to rearrange your code to open the file BEFORE you do the loop:
t = Twython(app_key=api_key, app_secret=api_secret, oauth_token=acces_token, oauth_token_secret=ak_secret)

tweets = []
MAX_ATTEMPTS = 10
COUNT_OF_TWEETS_TO_BE_FETCHED = 500

with open("Output.txt", "w") as text_file:
    for i in range(0,MAX_ATTEMPTS):
        if(COUNT_OF_TWEETS_TO_BE_FETCHED < len(tweets)):
            break
        if(0 == i):
            results = t.search(q="#twitter",count='100')
        else:
            results = t.search(q="#twitter",include_entities='true',max_id=next_max_id)
        for result in results['statuses']:
            tweet_text = result['user']['screen_name'], result['user']['followers_count'], result['text'], result['created_at'], result['source']
            tweets.append(tweet_text)
            print tweet_text
            text_file.write("#%s,%s,%s,%s,%s" % (result['user']['screen_name'], result['user']['followers_count'], result['text'], result['created_at'], result['source']))
            text_file.write('\n')
I use Python's with statement here to open a context manager. The context manager will handle closing the file when you drop out of the loop. I also added another write call that writes out a newline, so that each line of data ends up on its own line.
You could also fold the newline into the format string, which removes the need for the second write call; or you could open the file in append mode ('a' instead of 'w') if you need to reopen it inside the loop without losing what was already written.
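For example, the variant with the newline folded into the format string (same loop and fields as in the code above) would look like this:

            text_file.write("#%s,%s,%s,%s,%s\n" % (result['user']['screen_name'],
                                                   result['user']['followers_count'],
                                                   result['text'],
                                                   result['created_at'],
                                                   result['source']))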

There are two general solutions to your issue. Which is best may depend on more details of your program.
The simplest solution is just to open the file once at the top of your program (before the loop) and then keep reusing the same file object over and over in the later code. Only when the whole loop is done should the file be closed.
with open("Output.txt", "w") as text_file:
for i in range(0,MAX_ATTEMPTS):
# ...
for result in results['statuses']:
# ...
text_file.write("#%s,%s,%s,%s,%s" % (result['user']['screen_name'],
result['user']['followers_count'],
result['text'],
result['created_at'],
result['source']))
Another solution would be to open the file several times, but to use the "a" append mode when you do so. Append mode does not truncate the file like "w" write mode does, and it seeks to the end automatically, so you don't overwrite the file's existing contents. This approach would be most appropriate if you were writing to several different files. If you're just writing to the one, I'd stick with the first solution.
for i in range(0,MAX_ATTEMPTS):
    # ...
    for result in results['statuses']:
        # ...
        with open("Output.txt", "a") as text_file:
            text_file.write("#%s,%s,%s,%s,%s" % (result['user']['screen_name'],
                                                 result['user']['followers_count'],
                                                 result['text'],
                                                 result['created_at'],
                                                 result['source']))
One last point: It looks like you're writing out comma separated data. You may want to use the csv module, rather than writing your file manually. It can take care of things like quoting or escaping any commas that appear in the data for you.
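For example, a minimal sketch of the same write using csv.writer (the field names are the ones from the question; note that Python 2's csv module works on bytes, so the unicode tweet text is encoded first):

import csv

with open("Output.txt", "wb") as text_file:  # binary mode for csv on Python 2
    writer = csv.writer(text_file)
    for result in results['statuses']:
        # writerow quotes/escapes embedded commas and quotes automatically
        writer.writerow(['#' + result['user']['screen_name'],
                         result['user']['followers_count'],
                         result['text'].encode('utf-8'),
                         result['created_at'],
                         result['source']])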

Related

matching an entire list with each and every line of file

I have written a piece of code that performs find-and-replace on a text file, driven by a list.
It first maps the entire list into a dictionary. Then each line of the text file is processed and matched against every key in the dictionary; if a match is found anywhere in the line, it is replaced with the corresponding value from the dictionary.
Here is the code:
import sys
import re

#open file using open file mode
fp1 = open(sys.argv[1]) # Open file on read mode
lines = fp1.read().split("\n") # Create a list containing all lines
fp1.close() # Close file

fp2 = open(sys.argv[2]) # Open file on read mode
words = fp2.read().split("\n") # Create a list containing all lines
fp2.close() # Close file

word_hash = {}
for word in words:
    #print(word)
    if(word != ""):
        tsl = word.split("\t")
        word_hash[tsl[0]] = tsl[1]

#print(word_hash)
keys = word_hash.keys()
#skeys = sorted(keys, key=lambda x:x.split(" "),reverse=True)
#print(keys)
#print (skeys)

for line in lines:
    if(line != ""):
        for key in keys:
            #my_regex = key + r"\b"
            my_regex = r"([\"\( ])" + key + r"([ ,\.!\"।)])"
            #print(my_regex)
            if((re.search(my_regex, line, re.IGNORECASE|re.UNICODE))):
                line = re.sub(my_regex, r"\1" + word_hash[key]+r"\2",line,flags=re.IGNORECASE|re.UNICODE|re.MULTILINE)
                #print("iam :1",line)
            if((re.search(key + r"$", line, re.IGNORECASE|re.UNICODE))):
                line = re.sub(key+r"$", word_hash[key],line,flags=re.IGNORECASE|re.UNICODE|re.MULTILINE)
                #print("iam :2",line)
            if((re.search(r"^" + key, line, re.IGNORECASE|re.UNICODE))):
                #print(line)
                line = re.sub(r"^" + key, word_hash[key],line,flags=re.IGNORECASE|re.UNICODE|re.MULTILINE)
                #print("iam :",line)
        print(line)
    else:
        print(line)
The problem is that as the list grows, execution slows down, since every line of the text file is matched against each and every key in the list. Where can I improve the performance of this code?
List file:
word1===>replaceword1
word2===>replaceword2
.....
The list is tab-separated; I used ===> here just for readability.
Input file:
hello word1 I am here.
word2. how are you word1?
Expected Output:
hello replaceword1 I am here.
replaceword2. how are you replaceword1?
If your word list is small enough, the best speedup you can get for the match-and-replace process is to use a single big regexp together with a functional re.sub (one whose replacement argument is a callable).
This way you make a single call to the optimised function.
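A minimal sketch of that idea (word_hash is the dictionary built in the question; this version is case-sensitive, text is the whole input file read as one string, and the boundary groups are omitted for brevity):

import re

# One alternation over all keys; the callable picks the replacement per match.
# Longer keys come first so they win over their own substrings;
# re.escape guards keys containing regex metacharacters.
pattern = re.compile("|".join(sorted((re.escape(k) for k in word_hash),
                                     key=len, reverse=True)))
new_text = pattern.sub(lambda m: word_hash[m.group(0)], text)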
EDIT: To preserve the order of replacements (which can lead to chained replacing; I don't know if that is the intended behaviour), we can perform the replacement in batches rather than in a single run, where the batch order respects the file order and each batch is made of disjoint possible string matches.
The code would be as follows:
import sys
import re

word_hashes = []

def insert_word(word, replacement, hashes):
    if not hashes:
        return [{word: replacement}]
    for prev_word in hashes[0]:
        if word in prev_word or prev_word in word:
            return [hashes[0]] + insert_word(word, replacement, hashes[1:])
    hashes[0][word] = replacement
    return hashes

with open(sys.argv[2]) as fp2: # Open file on read mode
    words = fp2.readlines()
for word in [w.strip() for w in words if w.strip()]:
    tsl = word.split("\t")
    word_hashes = insert_word(tsl[0],tsl[1], word_hashes)

#open file using open file mode
lines = []
with open(sys.argv[1]) as fp1:
    content = fp1.read()

for word_hash in word_hashes:
    my_regex = r"([\"\( ])(" + '|'.join(word_hash.keys()) + r")([ ,\.!\"।)])"
    content = re.sub(my_regex, lambda x: x.group(1) + word_hash[x.group(2)] + x.group(3),
                     content, flags=re.IGNORECASE|re.UNICODE|re.MULTILINE)
print(content)
This yields chained replacements. For example, with the following words to replace
roses are red==>flowers are blue
are==>is
Text to parse
roses are red and beautiful
flowers are yellow
Output
roses is red and beautiful
flowers is yellow
Why not read the content of the entire file into a string and just use str.replace? For example:
def find_replace():
    txt = ''
    #Read text from the file as a string
    with open('file.txt', 'r') as fp:
        txt = fp.read()
    dct = {"word1":"replaceword1","word2":"replaceword2"}
    #Find and replace characters
    for k,v in dct.items():
        txt = txt.replace(k,v)
    #Write back the modified string
    with open('file.txt', 'w') as fp:
        fp.write(txt)
If the input file is:
hello word1 I am here.
word2. how are you word1?
The output will be:
hello replaceword1 I am here.
replaceword2. how are you replaceword1?
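One caveat to this approach: str.replace matches raw substrings, so word1 would also be replaced inside a longer token such as word12. If that matters, a small word-boundary variant using re (a hypothetical helper, not part of the answer above):

import re

def find_replace_words(txt, dct):
    # \b anchors each match at word boundaries, so keys only match whole words;
    # re.escape guards keys that contain regex metacharacters
    for k, v in dct.items():
        txt = re.sub(r'\b' + re.escape(k) + r'\b', v, txt)
    return txt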

Saving line after line from a txt file

So I wrote this code:
import csv

data = []
filename = "S:\Doc\Python\Data\Dekomp\Hth.txt"
with open(filename) as f:
    lines = f.readlines()
    for line in lines:
        if line.startswith('%'):
            data.append(line.split('+')[0].strip())
        if line.endswith('%'):
            break

with open('S:\Doc\Python\Data\Dekomp\Test.csv', 'w') as f:
    writer = csv.writer(f, delimiter=' ')
    for line in data:
        writer.writerow(line.split())
And my data looks like this:
Each header line starts with "%th=number", where number runs from 2 to 180 in steps of 2 (so it goes 2, 4, 6, ... up to 180).
Between those headers I have three columns of data, which I would like to append to a CSV file. With my current code I save only the header lines (%th=2, %th=4, ... %th=180). Do you have any idea how to change my code so that it reads a header, appends the data below it to a .txt or .csv file, then starts over when it "sees" the next header and saves that segment to another file, and so on up to "%th=180"?
UPDATE:
Input:
Expected output:
The program should append to a separate file all the data below each "%th=number" header; when the following segment appears it should save to another file, and the process should continue to the end of the input file.
In other words, each segment starts with an even number (2, 4, 6, 8, ... 180), so I should end up with 90 files, one for every segment.
UPDATE 2:
So I have changed my code:
with open("S:\Doc\Python\Data\Dekomp\Hth.txt", 'r') as f:
with open("S:\Doc\Python\Data\Dekomp\Hth2.txt", 'w') as g:
for line in f:
if line.startswith("%"):
g.write(line)
if line.endswith("%"):
break
But right now the problem is that with these startswith and endswith checks Python saves only the header lines, and if I delete them the obvious thing happens: it saves everything from the input file.
data = []
filename = "S:\Doc\Python\Data\Dekomp\Hth.txt"
with open(filename) as f:
    lines = f.readlines() # Reading file

def _get_all_starting_index(data): # Calculating index of all lines starting with %
    return [data.index(line) for line in data if line.startswith("%")]

indices = _get_all_starting_index(lines)

data_info_to_write_in_file = {} # for storing data to write in each individual file
for i in range(len(indices)): # looping over number of indices
    key = lines[indices[i]] # key value for starting of a segment.
    end_point = indices[i+1] if len(indices) > i+1 else len(lines) # finding end point (end of file for the last segment)
    lines_to_get = lines[indices[i]+1 : end_point] # getting lines in between and storing it in dictionary
    data_info_to_write_in_file[key] = lines_to_get

for key in data_info_to_write_in_file.keys(): # writing info in each individual file
    filename = "S:\Doc\Python\Data\Dekomp\{}.txt".format(key.strip().split("=")[-1])
    with open(filename, 'w') as f:
        for line in data_info_to_write_in_file[key]:
            f.write(line)
Hope it helps.
Feel free to ask if you need more info.

Python Script Can not find file object to close

When running this simple script, the "output_file.csv" remains open. I am unsure about how to close the file in this scenario.
I have looked at other examples where the result of open() is assigned to a variable such as f, and the object is closed using f.close(). Because of the with ... as csv_file, I am unclear as to where the file object actually is. Would anyone mind conceptually explaining where the disconnect is here? Ideally, I would like to know:
how to check namespace for all open file objects
how to determine the proper method for closing these objects
The simple script reads columns of data; where the mapping in column 1 is blank, it fills down:
import csv

output_file = csv.writer(open('output_file.csv', 'w'))
csv.register_dialect('mydialect', doublequote=False, quotechar="'")

def csv_writer(data):
    with open('output_file.csv',"ab") as csv_file:
        writer = csv.writer(csv_file, delimiter=',', lineterminator='\r\n', dialect='mydialect')
        writer.writerow(data)

D = [[]]
for line in open('inventory_skus.csv'):
    clean_line = line.strip()
    data_points = clean_line.split(',')
    print data_points
    D.append([line.strip().split(',')[0], line.strip().split(',')[1]])

D2 = D
for i in range(1, len(D)):
    nr = D[i]
    if D[i][0] == '':
        D2[i][0] = D[i-1][0]
    else:
        D2[i] = D[i]

for line in range(1, len(D2)):
    csv_writer(D2[line])
    print D2[line]
Actually, you are creating two file objects (in two different ways). First one:
output_file = csv.writer(open('output_file.csv', 'w'))
The file object is hidden within the csv.writer and not exposed by it. However,
you don't use that output writer at all, and you never close it, so it remains open until it is garbage-collected.
In
with open('output_file.csv',"ab") as csv_file:
you get the file object in csv_file. The context block takes care of closing the object, so no need to close it manually (file objects are context managers).
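Conceptually, the with block behaves roughly like a try/finally; this sketch (not the exact CPython expansion, and with a hypothetical row for illustration) shows why no explicit close() is needed, and how you can check the state afterwards:

import csv

data = ['sku123', 'blue']  # hypothetical row

# What the with statement does for you, approximately:
csv_file = open('output_file.csv', 'ab')
try:
    writer = csv.writer(csv_file)
    writer.writerow(data)
finally:
    csv_file.close()  # runs even if writerow raises

print csv_file.closed  # True: the name still exists, but the file is closed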
Manually indexing over D2 is unnecessary. Also, why are you opening the CSV file in binary mode?
def write_data_row(csv_writer, data):
    csv_writer.writerow(data)

with open('output_file.csv',"w") as csv_file:
    writer = csv.writer(csv_file, delimiter=',', lineterminator='\r\n', dialect='mydialect')
    for line in D2[1:]:
        write_data_row(writer, line)
        print line

Multiple output files created but empty

I am trying to split one file with two articles in it into two separate files with one article in each, for subsequent analysis of the articles. Each article in the initial file has an ID that I want to use to separate the files with, using RE.
Below is the initial input file, with ID number:
166068619 #### "Epilepsy: let's end our ignorance of this neglected condition
Helen Stephens is a young woman with epilepsy [...]."
106899978 #### "Great British Payoff shows that BBC governance is broken
If it was a television series, they'd probably call it [...]."
However, when I run my code, I do get two separate files as an output but they are empty.
This is my code:
def file_split(path_to_file):
"""Function splits bigger file into N smaller ones, based on a certain RE
match, that is used to break the bigger file into smaller ones"""
def pattern_extract(path_to_file):
"""Function identifies the number of RE occurences in a file,
No. can be used in further analysis as range No."""
import re
x = []
with open(path_to_file) as f:
for line in f:
match = re.search(r'^\d+?\t####\t', line)
if match:
a = match.group()
x.append(a)
return len(x)
y = pattern_extract(path_to_file)
m = y + 1
files = [open('filename%i.txt' %i, 'w') for i in range(1,m)]
with open(path_to_file) as f:
for line in f:
match = re.search(r'^\d+?\t####\t', line)
if match:
a = match.group()
#files = [open('filename%i.txt' %i, 'w') for i in range(1, m)]
files[i-1].write(a)
for f in files:
f.close()
return files
Output result is as follows:
file_split(path)
Out[19]:
[<open file 'filename1.txt', mode 'w' at 0x7fe121b130c0>,
<open file 'filename2.txt', mode 'w' at 0x7fe121b131e0>]
I am new to Python and I am not quite sure where the problem lies. I checked some other answers that addressed the multiple file outputs but cannot figure out the solution. Help would be very much appreciated.
There are two problems with your code:
you write only the line matching the ID (actually, just the match itself), not the rest
you are always writing to the last file, as you use i, the loop variable "left over" from the list comprehension
To fix it, you could change the lower portion of your code to this:
y = pattern_extract(path_to_file)
files = [open('filename%i.txt' %i, 'w') for i in range(y)]
n = -1
with open(path_to_file) as f:
    for line in f:
        if re.search(r'^\d+\s+####\s+', line):
            n += 1
        files[n].write(line)
But you do not have to read the file twice just to count the matches: simply open another file whenever a line matches an ID line, write directly to the last file in the list, and close all the files at the end.
open_files = []
with open(path_to_file) as f:
    for line in f:
        if re.search(r'^\d+\s+####\s+', line):
            open_files.append(open('filename%d.txt' % len(open_files), 'w'))
        open_files[-1].write(line)
for f in open_files:
    f.close()

Python: Copy several files with one column into one file with multi-column

I have the following question in Python 2.7:
I have 20 different txt files, each with exactly one column of numbers. Now, as output, I would like to have one file with all those columns side by side. How can I concatenate one-column files in Python? I was thinking about using the fileinput module, but I fear I would have to open all my different txt files at once.
My idea:
filenames = ['input1.txt','input2.txt',...,'input20.txt']
import fileinput
with open('/path/output.txt', 'w') as outfile:
    for line in fileinput.input(filenames):
        outfile.write(line)
Any suggestions? Thanks for any help!
A very simple (naive?) solution is:
filenames = ['a.txt', 'b.txt', 'c.txt', 'd.txt']
columns = []
for filename in filenames:
    lines = []
    for line in open(filename):
        lines.append(line.strip('\n'))
    columns.append(lines)

rows = zip(*columns)
with open('output.txt', 'w') as outfile:
    for row in rows:
        outfile.write("\t".join(row))
        outfile.write("\n")
But on *nix (including OS X terminal and Cygwin), it's easier to
$ paste a.txt b.txt c.txt d.txt
from the command line.
My suggestion: a slightly functional approach. Use a list comprehension to zip the file being read with the accumulated columns, and then join them back into strings, one column (file) at a time:
filenames = ['input1.txt','input2.txt','input20.txt']
outputfile = 'output.txt'

#maybe you need to separate each column:
separator = " "

separator_list = []
output_list = []
for f in filenames:
    with open(f,'r') as inputfile:
        if len(output_list) == 0:
            # first file: its lines seed the accumulator (newlines stripped)
            output_list = [line.rstrip('\n') for line in inputfile.readlines()]
            separator_list = [separator for x in range(0, len(output_list))]
        else:
            input_list = [line.rstrip('\n') for line in inputfile.readlines()]
            output_list = [''.join(y) for y in zip(output_list, separator_list, input_list)]

with open(outputfile,'w') as output:
    output.writelines(line + '\n' for line in output_list)
It keeps in memory the accumulator for the result (output_list) and one file's worth of lines at a time (the file being read, which is also the only file open for reading), but it may be a little slower and, of course, it is not fail-proof.
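If holding all 20 file handles open at once is acceptable (it usually is; typical per-process limits are far higher), a compact alternative, sketched here rather than taken from either answer above, streams the files line by line with itertools.izip so that no column is ever held in memory:

from itertools import izip  # Python 2.7; on Python 3, plain zip is already lazy

filenames = ['input1.txt', 'input2.txt', 'input20.txt']
handles = [open(f) for f in filenames]
try:
    with open('output.txt', 'w') as outfile:
        # izip yields one line per file per iteration and stops at the shortest file
        for row in izip(*handles):
            outfile.write(" ".join(line.rstrip('\n') for line in row) + '\n')
finally:
    for h in handles:
        h.close()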