read a file into python and remove values - list

I have the following code that reads a file and stores its values in a list. For each line, if it sees a '(' it splits the line there and discards everything that follows on that line.
with open('filename', 'r') as f:
    list1 = [line.strip().split('(')[0].split() for line in f]
Is there a way that I can change this to split not only at a '(' but also at a '#'?

Use re.split.
With a sample of your data's format, it may be possible to do better than this, but without context we can still show how to use this with the code you have provided.
To use re.split():
import re
with open('filename', 'r') as f:
    list1 = [re.split('[(#]+', line.strip())[0].split() for line in f]
Notice that the first parameter in re.split() is a regular expression to split on, and the second parameter is the string to apply this operation to.
General idea from: Splitting a string with multiple delimiters in Python
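For example, on a made-up input line containing both delimiters, the split behaves like this:

```python
import re

# hypothetical input line with both a '(' and a '#'
line = "1 2 3 (a comment) # trailing note"
parts = re.split('[(#]+', line.strip())
print(parts[0])          # the text before the first '(' or '#'
print(parts[0].split())  # -> ['1', '2', '3']
```

Taking `[0]` keeps only the text before the first delimiter, and the final `.split()` breaks it into the individual values, just as in the original code.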

Related

Making a text file which will contain my list items and applying regular expression to it

I am supposed to write code that reads a text file containing words with some common linguistic features, applies a regular expression to all of the words, and writes one file containing the changed words.
For now let's say my text file named abcd.txt has these words
king
sing
ping
cling
booked
looked
cooked
packed
My first question starts here: in my simple text file, how should I write these words to get the desired results - one word per line, or comma-separated?
This is the code provided by user palvarez.
import re
with open("new_abcd", "w+") as new, open("abcd") as original:
for word in original:
new_word = re.sub("ing$", "xyz", word)
new.write(new_word)
Can I add something like -
with open("new_abcd", "w+") as file, open("abcd") as original:
    for word in original:
        new_aword = re.sub("ed$", "abcd", word)
        new.write(new_aword)
in the same code file? I want something like -
kabc
sabc
pabc
clabc
bookxyz
lookxyz
cookxyz
packxyz
PS - I don't know whether mentioning this is necessary or not, but I am supposed to do this for a Unicode supported script Devanagari. I didn't use it here in my examples because many of us here can't read the script. Additionally that script uses some diacritics. eg. 'का' has one consonant character 'क' and one vowel symbol 'ा' which together make 'का'. In my regular expression I need to condition the diacritics.
I think the one-word-per-line approach is better, since you don't have to trouble yourself with delimiters and stripping.
With a file like this:
king
sing
ping
cling
booked
looked
cooked
packed
And code like this, using re.sub to replace a pattern:
import re
with open("new_abcd.txt", "w") as new, open("abcd.txt") as original:
    for word in original:
        new_word = re.sub("ing$", "xyz", word)
        new_word = re.sub("ed$", "abcd", new_word)
        new.write(new_word)
It creates a resulting file:
kxyz
sxyz
pxyz
clxyz
bookabcd
lookabcd
cookabcd
packabcd
I tried out with the diacritic you gave us and it seems to work fine:
print(re.sub("ा$", "ing", "का"))
>>> कing
EDIT: added multiple replacements. You can put your replacements in a list and iterate over it, calling re.sub, as follows.
import re
# list where the first element is the pattern and the second is the replacement string
replacements = [("ing$", "xyz"), ("ed$", "abcd")]
with open("new_abcd.txt", "w") as new, open("abcd.txt") as original:
    for word in original:
        new_word = word
        for pattern, replacement in replacements:
            new_word = re.sub(pattern, replacement, word)
            if new_word != word:
                break
        new.write(new_word)
This limits one modification per word, only the first that modifies the word is taken.
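To see that one-substitution-per-word behaviour in isolation, here is the same loop applied to in-memory strings (sample English words only, standing in for the real Devanagari data):

```python
import re

# each pair is (pattern, replacement); only the first one that matches is applied
replacements = [("ing$", "xyz"), ("ed$", "abcd")]

def replace_first(word):
    for pattern, replacement in replacements:
        new_word = re.sub(pattern, replacement, word)
        if new_word != word:
            return new_word  # stop after the first pattern that changes the word
    return word  # no pattern matched; word passes through unchanged

print(replace_first("king"))    # -> kxyz
print(replace_first("booked"))  # -> bookabcd
```

A word matching neither pattern, such as "hello", comes back unchanged.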
For starters, it is recommended to use the with context manager to open your file; that way you do not need to explicitly close the file once you are done with it.
Another advantage is that you can then process the file line by line, which is very useful when working with larger sets of data. Whether you write the output on a single line or in CSV format depends entirely on your output requirements and how you want to process it further.
As an example, to read from a file and, say, substitute a substring, you can use re.sub.
import re
with open('abcd.txt', 'r') as f:
    for line in f:
        # do something here
        print(re.sub("ing$", 'ring', line.strip()))
>>
kring
sring
pring
clring
booked
looked
cooked
packed
Another nifty trick is to manage both the input and output utilizing the same context manager like:
import re
with open('abcd.txt', 'r') as f, open('out_abcd.txt', 'w') as o:
    for line in f:
        # notice that we add '\n' to write each output to a new line
        o.write(re.sub("ing$", 'ring', line.strip()) + '\n')
This creates an output file with your new contents in a very memory-efficient way.
If you'd like to write to a CSV file or any other specific format, I highly suggest you spend some time understanding Python's input and output functions. If text linguistics is what you are going for, then also study how different languages are encoded, and look further into Python's regex operations.

Find and remove specific string from a line

I am hoping to receive some feedback on some code I have written in Python 3 - I am attempting to write a program that reads an input file which has page numbers in it. The page numbers are formatted as: "[13]" (this means you are on page 13). My code right now is:
pattern='\[\d\]'
for line in f:
    if pattern in line:
        re.sub('\[\d\]',' ')
        re.compile(line)
        output.write(line.replace('\[\d\]', ''))
I have also tried:
for line in f:
    if pattern in line:
        re.replace('\[\d\]','')
        re.compile(line)
        output_file.write(line)
When I run these programs, a blank file is created, rather than a file containing the original text minus the page numbers. Thank you in advance for any advice!
Your if statement won't work because it isn't doing a regex match - it's looking for the literal string \[\d\] in line.
for line in f:
    # check whether the pattern appears anywhere in the line
    if re.search(r'\[\d\]', line):
        line = re.sub(r'\[\d\]', ' ', line)
    output_file.write(line)
(Note the use of re.search rather than re.match, since re.match only matches at the beginning of the string, and that re.sub needs the input string as its third argument.)
Additionally, you're using re.compile() incorrectly. Its purpose is to pre-compile your pattern into a regex object. This improves performance if you use the pattern a lot, because the expression is evaluated once rather than re-evaluated each time through the loop.
pattern = re.compile(r'\[\d\]')
if pattern.search(line):
    # ...
Lastly, you're getting a blank file because the literal `if pattern in line` test never matches, so output.write() is never reached. Note also that file objects provide write() and writelines() - there is no writeline() method.
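Putting those pieces together, a corrected sketch might look like this (file names are placeholders, and the sample input is created first so the snippet is self-contained; the pattern uses \d+ so multi-digit page numbers such as [13] are also matched):

```python
import re

# make a small sample input so the sketch is self-contained
with open('input.txt', 'w') as f:
    f.write('intro text\nsome words [13] more words\nthe end [7]\n')

# compile once, reuse on every line; \d+ also matches multi-digit pages like [13]
page_num = re.compile(r'\[\d+\]')

with open('input.txt') as f, open('output.txt', 'w') as output_file:
    for line in f:
        # substitute any page marker, then write every line (modified or not)
        output_file.write(page_num.sub(' ', line))

print(open('output.txt').read())
```

Every line is written out, whether or not it contained a page number, so the output preserves the original text minus the markers.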
You don't write unmodified lines to your output.
Try something like this
if re.search(r'\[\d\]', line):
    line = re.sub(r'\[\d\]', ' ', line)  # remove the page number
output_file.write(line)  # note that it's not part of the if block above
That's why your output file is empty.

Change values in Python file (tab-delimited list)

I have read a *.INP file into Python. Here is the code I used:
import csv
r = csv.reader(open('T_JAC.INP')) # Here your csv file
lines = [l for l in r]
print lines[23]
print lines[26]
The first print statement produces ['7E21\t\texthere (text) text alphabets text alphanumeric'].
The second print statement produces ['4E15\t\texthere (text) text alphabets text alphanumeric'].
I need to change the numbers 7E21 and 4E15 to the values from a list fil_replace = ['9E21', '6E15'], i.e. I need to replace 7E21 with 9E21 and 4E15 with 6E15.
Is there a way to replace these numbers?
Something with str.replace should work (as long as you read the file contents in as a single string), albeit not the most efficient solution:
r = open('T_JAC.INP').read()  # the whole file as one string
r = r.replace('7E21', '9E21').replace('4E15', '6E15')
with open('YAC.IN', 'w') as out:
    out.write(r)
If you're looking for a way to just replace the values 'in place' in the file unfortunately it's not possible. The entire file has to be read in, modified, then re-written.
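That read-modify-write cycle can be sketched as follows (a tiny stand-in file is created first so the example is self-contained):

```python
# create a small stand-in for T_JAC.INP
with open('T_JAC.INP', 'w') as f:
    f.write('7E21\t\ttexthere (text)\n4E15\t\ttexthere (text)\n')

# read the entire file into memory as one string
with open('T_JAC.INP') as f:
    text = f.read()

# modify the string, then write the whole thing back out
text = text.replace('7E21', '9E21').replace('4E15', '6E15')
with open('T_JAC.INP', 'w') as f:
    f.write(text)

print(open('T_JAC.INP').read())  # now contains 9E21 and 6E15
```

Opening the file a second time with mode 'w' truncates it, so the modified string fully replaces the original contents.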

Python3.4 : Matching and returning list values

I have opened a text file in Python which has thousands of lines. I need to search each line to see if it contains 1 of many different specified values. I then need to return the specific value and the corresponding line that contains that value.
q1 = open('/home/lost/StockRec/StockIndex/edgar.full-index.2015.QTR1.master.idx', 'r')
list = ['1341234', '12341234', '4563456', '12341234', '6896786', '2727638']
for line in q1:
    for listValue in list:
        if listValue in line:
            print(listValue, line)
I know this code is wrong. I need to search each line in q1 for each of the specific values in the list. I need to then print the specific list value and the line containing that value.
Iterating over an open file object, as your code does, already yields one line at a time, so your loop structure is fine. You would only need to split the text yourself if you read the whole file at once with q1.read(), which returns a single string.
In that case, look for identifying information in your file, such as newline characters ('\n') or a specific character each record starts with, and split on it:
lines = q1.read().split('your identifying character here')
That gives you a list of lines, and then you can run the loops you have already written. One other tip: avoid naming your list `list`, since that shadows the built-in type.
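For reference, here is the matching loop as a self-contained sketch, with made-up lines standing in for the EDGAR index file:

```python
# made-up index lines; the real ones come from the .idx file
lines = [
    'ACME CORP   1341234   2015-01-15',
    'FOO INC     9999999   2015-02-01',
    'BAR LLC     6896786   2015-03-12',
]
values = ['1341234', '4563456', '6896786']

# collect (value, line) pairs for every value found in every line
matches = []
for line in lines:
    for value in values:
        if value in line:
            matches.append((value, line))
            print(value, line)
```

Only the lines containing one of the listed values are reported, paired with the value that matched.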

how to use pyparsing to match multiple lines while using iterator to read the file

In the definition of my Pyparsing grammar, there are some grammars which will match strings that span multiple lines.
If I use the api like:
PyGrammar.parseString(open('file_name').read())
It will behave in the correct way.
However if I want to use the iterator to read the file like
with open('file_name') as f:
    for line in f:
        PyGrammar.parseString(line)
the parser will break.
Is there a way to work around this? Thanks...
According to Paul (the author of pyparsing):
with open('file_name') as f:
    for line in f:
        PyGrammar.parseString(line)
The code above is not the correct way to use pyparsing: pyparsing needs to see the complete source text before parsing it, so calling parseString on one line of text at a time does not work. A workaround is to wrap it with a buffer that accumulates lines, like:
# set up a generator to yield a line of text at a time
linegenerator = open('big_hairy_file.txt')
# buffer will accumulate lines until a fully parseable piece is found
buffer = ""
for line in linegenerator:
    buffer += line
    match = next(grammar.scanString(buffer), None)
    while match:
        tokens, start, end = match
        print(tokens.asList())
        buffer = buffer[end:]
        match = next(grammar.scanString(buffer), None)