Parse CSV efficiently in python - python-2.7

I am writing a CSV parser which has the following structure:
class decode:
    def __init__(self):
        self.fd = open('test.csv')

    def decodeoperation(self):
        for row in self.fd:
            cmd = self.decodecmd(row)
            if cmd == 'A':
                self.decodeAopt()
            elif cmd == 'B':
                self.decodeBopt()

    def decodeAopt(self):
        for row in self.fd:
            # decode further dependencies based on cmd A till
            # a condition occurs on any further row
            return

    def decodeBopt(self):
        for row in self.fd:
            # decode further dependencies based on cmd B till
            # a condition occurs on any further row
            return
The current code is working fine for me, but I don't feel good about iterating through the CSV file in all the methods. Could it be done in a better way?

There is nothing inherently wrong with using a common iterator across multiple methods, as long as you can determine in advance which method to dispatch to at any given point in the sequence (which you are doing by decoding the cmd from the row and getting 'A', 'B', etc.). The design has issues if you have to read several items before you could determine which method to call, and might have to back up if you picked the wrong method and needed to try another. In parsing, this is called backtracking. Since you are passing around a file object, backing up is difficult. Note that your separate decoder methods will have to know when to stop before reading the next row that contains a command, so they will need some sort of terminating sentinel row that they can recognize.
Some general comments on your Python and class design:
You have a nice simple if-elif-elif dispatch table that can translate to a Python dict like this:
# put this code in place of your "if cmd == ... elif elif elif..." code
dispatch = {
    # note - no ()'s, we just want to reference the methods, not call them
    'A': self.decodeAopt,
    'B': self.decodeBopt,
    'C': self.decodeCopt,
    # look how easy it is to add more decoders
}

# look up which decoder to use for the current cmd
decoder = dispatch[cmd]
# run it
decoder()

# or do it all in one line
dispatch[cmd]()
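One caveat worth noting (not covered in the original answer): dispatch[cmd] raises a KeyError for a command that has no registered decoder. If you would rather skip or log unknown commands, dict.get with a fallback works:
# tolerate commands that have no registered decoder
decoder = dispatch.get(cmd)
if decoder is None:
    print("unknown cmd: %r" % cmd)   # or raise / continue, as appropriate
else:
    decoder()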
Instead of having your __init__ method open a file, let it accept an iterator object. This will make it much easier to write tests for your object, since you'll be able to pass simple Python lists containing CSV rows.
class decode:
    def __init__(self, sequence):
        self.fd = sequence
You might want to rename this var from 'fd' to something like 'seq', since it doesn't have to be a file, but could be any iterable that gives you decodable rows.
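For example, a test could then feed the parser a plain list instead of opening a file (hypothetical row strings, just to show the idea):
# drive the class with plain strings (or dicts) instead of a real file
rows = ['A,1,2', 'B,3,4']   # hypothetical row data
d = decode(rows)
d.decodeoperation()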
If you are doing your own CSV parsing, look at using the builtin csv module. It will do quite a bit of work for you, like parsing quoted strings that could contain commas, and can give you easy-to-work-with dicts for each row, given headers read from the input file, or specified by you. If you have modified __init__ as I suggested, you can use it like:
import csv
# assuming test.csv has a header row
reader = csv.DictReader(open('test.csv'))
# or specify headers if not - I encourage you to give these columns better names
reader.fieldnames = ['cmd', 'val1', 'val2', 'val3']
decoder = decode(reader)
decoder.decodeoperation()
Then you can write in decodeoperation:
cmd = row['cmd']
Note that this would impart a slightly different design to your class, that it would expect to be given a sequence of dicts, rather than a sequence of strings.
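Putting the three suggestions together (iterator injected via __init__, dispatch dict, DictReader rows), a rough sketch of the reworked class might look like this; the decoder bodies and the 'END' sentinel are placeholders, not part of the original answer:
import csv

class decode:
    def __init__(self, sequence):
        self.seq = sequence

    def decodeoperation(self):
        dispatch = {
            'A': self.decodeAopt,
            'B': self.decodeBopt,
        }
        for row in self.seq:
            dispatch[row['cmd']]()

    def decodeAopt(self):
        for row in self.seq:
            if row['cmd'] == 'END':   # hypothetical sentinel marking the end of an A block
                return
            # decode further A dependencies from row here

    def decodeBopt(self):
        for row in self.seq:
            if row['cmd'] == 'END':   # hypothetical sentinel
                return
            # decode further B dependencies from row here

reader = csv.DictReader(open('test.csv'))   # assumes test.csv has a header row with a 'cmd' column
decoder = decode(reader)
decoder.decodeoperation()
Because all the methods pull rows from the same iterator, the decode-until-sentinel structure of the original class is preserved.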

Related

Converting segments of large .cif files to smaller .pdb files

I'm trying to carve out some binding sites with ligands from cif-files of ribosome crystal structures, and have encountered an annoying problem involving a type error.
TypeError: %c requires int or char
Using the code below,
from Bio.PDB import *
from Bio import PDB

class save_res(Select):
    def accept_residue(self, residue):
        if residue in keep_res_list:
            print(residue)
            return 1
        else:
            return 0

keep_res_list = []
parser = MMCIFParser()
structure = parser.get_structure("1vvj.cif", "./1vvj.cif")
structure = structure[0]
atom_list = Selection.unfold_entities(structure, "A")  # A for atoms
ns = NeighborSearch(atom_list)

for residue in structure.get_residues():
    if residue.get_resname() == "PAR":
        for atom in residue:
            center = atom.get_coord()
            neighbors = ns.search(center, 5.0)
            neighbor_residue_list = Selection.unfold_entities(neighbors, "R")
            for res in neighbor_residue_list:
                if res not in keep_res_list:
                    keep_res_list.append(res)

io = PDBIO()
io.set_structure(structure)
io.save("1vvj_bs.pdb", save_res())
gives me the error:
File "/scratch/software/anaconda3/envs/my-devel-3.6/lib/python3.6/site-packages/Bio/PDB/PDBIO.py", line 112, in _get_atom_line
return _ATOM_FORMAT_STRING % args
TypeError: %c requires int or char
This code works well when changing the pdb-id to 1fyb, which also has the same ligand id.
I'm thinking the problem stems from the vast amounts of chains and their IDs in the original file. Am I completely wrong in this assumption or does anyone know how to fix this? I've been trying to find a way to rename the chain IDs, but haven't found a viable method to do this.
Your help is appreciated.
The chain name format in _ATOM_FORMAT_STRING is %c, while in this case you have a chain named QA.
Chain names in PDB files were traditionally single characters, but there are only so many letters and digits; for a ribosome it is necessary to use longer names. The PDB format has room for a second letter -- the empty column to the left of the 1-character chain name. Many programs support it, but not all, and it is not part of the official specification.
So you can either use PDB files with 2-character chains (if the rest of your workflow supports them) or rename the chains in the output (your output is only a tiny part of the original structure).
Here is how to do it in gemmi:
import gemmi

structure = gemmi.read_structure('1vvj.cif')
model = structure[0]
ns = gemmi.NeighborSearch(model, structure.cell, 5.0).populate()
for chain in model:
    for residue in chain:
        if residue.name == 'PAR':
            for atom in residue:
                for nb in ns.find_neighbors(atom):
                    nb.to_cra(model).residue.flag = 'y'
sel = gemmi.Selection().set_residue_flags('y')
new_structure = sel.copy_structure_selection(structure)
#new_structure.remove_empty_chains()
#new_structure.shorten_chain_names()
new_structure.write_minimal_pdb('1vvj-par.pdb')
The two commented-out lines rename the chains.
One difference compared with your code is that NeighborSearch in gemmi is symmetry-aware: it also finds nearby atoms from symmetry mates. In Biopython you search only within the asymmetric unit (ASU).
Both are different from the biological assembly -- PDB-101 covers it nicely.
If you'd like to search the ASU only, replace structure.cell with gemmi.UnitCell() above, i.e. don't pass the unit cell information.
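Concretely, the ASU-only variant just swaps in an empty cell; everything else stays the same as in the snippet above:
# search within the asymmetric unit only: pass an empty cell instead of structure.cell
ns = gemmi.NeighborSearch(model, gemmi.UnitCell(), 5.0).populate()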
(You can also ask such questions on Bioinformatics.SE -- they tend to get answers sooner there.)

Why must I run this code a few times before my entire .csv file is converted into a .yaml file?

I am trying to build a tool that can convert .csv files into .yaml files for further use. I found a handy bit of code that does the job nicely from the link below:
Convert CSV to YAML, with Unicode?
which states that the line will take the dict created by opening a .csv file and dump it to a .yaml file:
out_file.write(ry.safe_dump(dict_example,allow_unicode=True))
However, one small kink I have noticed is that when it is run once, the generated .yaml file is typically incomplete by a line or two. In order to have the .csv file exhaustively read through to create a complete .yaml file, the code must be run two or even three times. Does anybody know why this could be?
UPDATE
Per request, here is the code I use to parse my .csv file, which is two columns long (with a string in the first column and a list of two strings in the second column) and will typically be 50 rows long (or maybe more). Also note that it is designed to remove any '\n' characters or spaces that could potentially cause problems later on in the code.
import csv

csv_contents = {}
with open("example1.csv", "rU") as csvfile:
    green = csv.reader(csvfile, dialect='excel')
    for line in green:
        candidate_number = line[0]
        first_sequence = line[1].replace(' ', '').replace('\r', '').replace('\n', '')
        second_sequence = line[2].replace(' ', '').replace('\r', '').replace('\n', '')
        csv_contents[candidate_number] = [first_sequence, second_sequence]
csv_contents.pop('Header name', None)
Ultimately, it is not that important that I maintain the order of the rows from the original dict, just that all the information within the rows is properly structured.
I am not sure what the cause could be, but you might be running out of memory, since you create the whole YAML document in memory first and then write it out. It is much better to stream it out directly.
You should also note that the code in the question you link to doesn't preserve the order of the original columns, something easily circumvented by using round_trip_dump instead of safe_dump.
You probably want to make a top-level sequence (list) as in the desired output of the linked question, with each element being a mapping (dict).
The following parses the CSV, taking the first line as keys for mappings created for each following line:
import sys
import csv
import ruamel.yaml as ry
import dateutil.parser  # pip install python-dateutil


def process_line(line):
    """convert lines, trying int, float, date"""
    ret_val = []
    for elem in line:
        try:
            res = int(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        try:
            res = float(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        try:
            res = dateutil.parser.parse(elem)
            ret_val.append(res)
            continue
        except ValueError:
            pass
        ret_val.append(elem.strip())
    return ret_val


csv_file_name = 'xyz.csv'
data = []
header = None
with open(csv_file_name) as inf:
    for line in csv.reader(inf):
        d = process_line(line)
        if header is None:
            header = d
            continue
        data.append(ry.comments.CommentedMap(zip(header, d)))

ry.round_trip_dump(data, sys.stdout, allow_unicode=True)
with input xyz.csv:
id, title_english, title_russian
1, A Title in English, Название на русском
2, Another Title, Другой Название
this generates:
- id: 1
  title_english: A Title in English
  title_russian: Название на русском
- id: 2
  title_english: Another Title
  title_russian: Другой Название
The process_line function is just some sugar that tries to convert the strings in the CSV file to more useful types, and to strings without leading spaces (resulting in far fewer quotes in your output YAML file).
I have tested the above on files with 1000 rows, without any problems (I won't post the output though).
The above was done using Python 3 as well as Python 2.7, starting with a UTF-8 encoded file xyz.csv. If you are using Python 2, you can try unicodecsv if you need to handle Unicode input and things don't work out as well as they did for me.
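Given that the original complaint was about incomplete output files, it is worth noting that round_trip_dump can also write straight into an open file handle rather than via an intermediate string. A small sketch reusing the data list from above (the output filename is illustrative):
# stream the YAML directly into the output file; the with block
# also makes sure buffers are flushed and the file is closed
with open('xyz.yaml', 'w') as out_file:
    ry.round_trip_dump(data, out_file, allow_unicode=True)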

Does csv.DictReader store file in memory?

I have to read a large CSV file of almost 100K rows, and it would be much easier to process that file if I could read each row in a dictionary format.
After a little research I found Python's built-in csv.DictReader from the csv module.
But the documentation does not clearly mention whether it stores the whole file in memory or not.
It does mention that:
The fieldnames parameter is a sequence whose elements are associated with the fields of the input data in order.
But I'm not sure whether that sequence is stored in memory or not.
So the question is: does it store the whole file in memory?
If so, is there any other option to read a single row at a time from the file, generator-style, and get each row as a dict?
Here is my code:
def file_to_dictionary(self, file_path):
    """Read CSV rows as a dictionary """
    file_data_obj = {}
    try:
        self.log("Reading file: [{}]".format(file_path))
        if os.path.exists(file_path):
            file_data_obj = csv.DictReader(open(file_path, 'rU'))
        else:
            self.log("File does not exist: {}".format(file_path))
    except Exception as e:
        self.log("Failed to read file.", e, True)
    return file_data_obj
As far as I'm aware, the DictReader object you create, in your case file_data_obj, is a generator-type object.
Such objects are not stored in memory -- rows are read lazily -- but they can only be iterated over once!
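So you can simply stream the rows one at a time; a minimal sketch (the filename and process_row are placeholders, not from the original answer):
import csv

with open('my_large_file.csv', 'rU') as f:   # filename is illustrative
    for row in csv.DictReader(f):
        # only the current row (a dict keyed by the header fields)
        # is held in memory at any time
        process_row(row)                      # placeholder for your own handling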
To print the fieldnames of your data as a list you can simply use: print file_data_obj.fieldnames
Secondly, in my experience I find it much easier to use a list of dictionaries when reading data from csv files, where each dictionary represents a row in your file. Consider the following:
def csv_to_dict_list(path):
    csv_in = open(path, 'rb')
    reader = csv.DictReader(csv_in, restkey=None, restval=None, dialect='excel')
    fields = reader.fieldnames
    list_out = [row for row in reader]
    return list_out, fields
Using the function above (or something similar), you can achieve your goal with a couple of lines. E.g.:
data, data_fields = csv_to_dict_list(path)
print data_fields   # prints fieldnames
print data[0]       # prints the first row of data from the file
Hope this helps!
Luke

Python Generator : Not able to generate multiple files

I have a file of 65,000 docs and their contents. I have split this file into two data sets, a training and a test data set. I want to break the training data set into small files by number of lines and train my model, but the code produces only the first chunk and keeps producing that same chunk over and over. Most probably I am consuming an already-used generator every time. I have posted the code below for reference. Any improvements or spotting of logical errors will be greatly appreciated. Thanks.
Code to create the training and test data sets:
import itertools

fo = open('desc_py_output.txt', 'rb')

def generate_train_test(doc_iter, size):
    while True:
        data = [line for line in itertools.islice(doc_iter, size)]
        if not data:
            break
        yield data

for i, line in enumerate(generate_train_test(fo, 50000)):
    if i == 0:
        training_data = line
    else:
        test_data = line
Now I am trying to create small files of 5000 docs using the following code:
def generate_in_chunks(doc_iter, size):
    while True:
        data = [line for line in itertools.islice(doc_iter, size)]
        if not data:
            break
        yield data

for i, line in enumerate(generate_in_chunks(training_data, 5000)):
    x = [member.split('^')[2] for member in line]
    y = [member.split('^')[1] for member in line]
    print x[0]
This is printing the same documents again and again.
The generate_train_test function yields lists, so in your generate_in_chunks function doc_iter is a list, not an iterator. A list does not get consumed, so the islice will always start again from the beginning. Make sure doc_iter is an iterator at the beginning, then it will work. Also, it seems like you can use the same function for both.
def chunkify(doc_iter, size):
    doc_iter = iter(doc_iter)  # make sure doc_iter really is an iterator
    while True:
        data = [line for line in itertools.islice(doc_iter, size)]
        if not data:
            break
        yield data
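To see concretely why the list kept "restarting" (a small illustration, not part of the original answer):
import itertools

lst = [1, 2, 3, 4]
print list(itertools.islice(lst, 2))   # [1, 2]
print list(itertools.islice(lst, 2))   # [1, 2] again -- a list is not consumed

it = iter(lst)
print list(itertools.islice(it, 2))    # [1, 2]
print list(itertools.islice(it, 2))    # [3, 4] -- the iterator advances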
Alternatively, you could return a generator instead of a list, but this will only work if you consume that generator before yielding the next one (otherwise you'd get into an infinite loop). In that case, you could use something like this.
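A sketch of that approach could look like the following (chunkify_lazy is an illustrative name, not from the original answer); note that each yielded chunk must be fully consumed before the next one is requested:
import itertools

def chunkify_lazy(doc_iter, size):
    doc_iter = iter(doc_iter)
    while True:
        try:
            first = next(doc_iter)   # stop cleanly when the input is exhausted
        except StopIteration:
            return
        # yield a generator over this chunk instead of building a list;
        # consume it fully before asking for the next chunk
        yield itertools.chain([first], itertools.islice(doc_iter, size - 1))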

Python - null object pattern with generators

It is apparently Pythonic to return values that can be treated as 'False' versions of the successful return type, such that if MyIterableObject: do_things() is a simple way to deal with the output whether or not it is actually there.
With generators, bool(MyGenerator) is always True, even if it would yield nothing at all. So while I could write something like the following:
result = list(get_generator(*my_variables))
if result:
    do_stuff(result)
It seems like it defeats the benefit of having a generator in the first place.
Perhaps I'm just missing a language feature or something, but what is the pythonic language construct for explicitly indicating that work is not to be done with empty generators?
To be clear, I'd like to be able to give the user some insight as to how much work the script actually did (if any) - contextual snippet as follows:
# Python 2.7
templates = files_from_folder(path_to_folder)
result = list(get_same_sections(templates))  # returns generator
if not result:
    msg("No data to sync.")
    sys.exit()
for data in result:
    for i, tpl in zip(data, templates):
        tpl['sections'][i]['uuid'] = data[-1]
msg("{} sections found to sync up.".format(len(result)))
It works, but I think that ultimately it's a waste to change the generator into a list just to see if there's any work to do, so I assume there's a better way, yes?
EDIT: I get the sense that generators just aren't supposed to be used in this way, but I will add an example to show my reasoning.
There's a semi-popular 'helper function' in Python that you see now and again when you need to traverse a structure like a nested dict or what-have-you. Usually called getnode or getn, whenever I see it, it reads something like this:
def get_node(seq, path):
    for p in path:
        if p in seq:
            seq = seq[p]
        else:
            return ()
    return seq
So in this way, you can make it easier to deal with the results of a complicated path to data in a nested structure without always checking for None or try/except when you're not actually dealing with 'something exceptional'.
mydata = get_node(my_container, ('path', 2, 'some', 'data'))
if mydata:  # could also be "for x in mydata", etc
    do_work(mydata)
else:
    something_else()
It's looking less like this kind of syntax would (or could) exist with generators, without writing a class that handles generators in this way as has been suggested.
A generator does not have a length until you've exhausted its iterations, and the only way to find out whether it yields anything or not is to exhaust it:
items = list(myGenerator)
if items:
    # do something
Unless you write a class with a __nonzero__ method that internally looks at your items list:
class MyGenerator(object):
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        for i in self.items:
            yield i

    def __nonzero__(self):
        return bool(self.items)
>>> bool(MyGenerator([]))
False
>>> bool(MyGenerator([1]))
True
>>>
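One small addition to the answer (the question is Python 2.7, hence __nonzero__ above): in Python 3 the truth-testing hook is named __bool__, so a wrapper that works on both versions can simply alias it:
class MyGenerator(object):
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        for i in self.items:
            yield i

    def __bool__(self):          # Python 3 truth-testing hook
        return bool(self.items)

    __nonzero__ = __bool__       # Python 2 alias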