how to select every 5th row in a .csv file using python - python-2.7

I have just a normal .csv file whose first row has titles for each column.
How can I create a new .csv file that has the same header (first row) but contains only every 5th row of the original file?
Thank you!

This will take any text file and output the first line and every 5th line after that. Since the columns aren't being accessed, the file doesn't have to be handled as a .csv:
with open('a.txt') as f:
    with open('b.txt', 'w') as out:
        for i, line in enumerate(f):
            if i % 5 == 0:
                out.write(line)
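The same "first line plus every 5th line after it" behaviour can also be expressed with itertools.islice, if you prefer; this is just an equivalent sketch using the same a.txt/b.txt names as above:

from itertools import islice

# Copy lines 0, 5, 10, ... from a.txt into b.txt
with open('a.txt') as f, open('b.txt', 'w') as out:
    out.writelines(islice(f, 0, None, 5))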

This will read the file one line at a time and only write rows 5, 10, 15, 20...
import csv

count = 0
# open files and handle headers
with open('input.csv') as infile:
    with open('output.csv', 'w') as outfile:
        reader = csv.DictReader(infile)
        writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
        writer.writeheader()
        # iterate through file and write only every 5th row
        for row in reader:
            count += 1
            if not count % 5:
                writer.writerow(row)
(works with Python 2 and 3)
If you'd prefer to start counting from data row #1, so that rows 1, 6, 11, 16... are written, change the initialisation at the top to:
count = -1

If you want to use the csv library, a tighter version would be...
import csv

# open files and handle headers
with open('input.csv') as infile:
    with open('output.csv', 'w') as outfile:
        reader = csv.DictReader(infile)
        writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
        writer.writeheader()
        # iterate through file and write only every 5th row
        writer.writerows([x for i, x in enumerate(reader) if i % 5 == 4])
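Side note (my addition, not part of the original answers): on Python 3 the csv module expects files opened in text mode with newline='', so a sketch of the same approach there would be:

import csv

# Python 3 sketch of the same every-5th-row copy; newline='' stops the csv
# module from inserting extra blank lines on Windows.
with open('input.csv', newline='') as infile, \
     open('output.csv', 'w', newline='') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(row for i, row in enumerate(reader) if i % 5 == 4)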

Related

Saving line after line from a txt file

So I wrote this code:
import csv

data = []
filename = "S:\Doc\Python\Data\Dekomp\Hth.txt"
with open(filename) as f:
    lines = f.readlines()
    for line in lines:
        if line.startswith('%'):
            data.append(line.split('+')[0].strip())
        if line.endswith('%'):
            break

with open('S:\Doc\Python\Data\Dekomp\Test.csv', 'w') as f:
    writer = csv.writer(f, delimiter=' ')
    for line in data:
        writer.writerow(line.split())
And my data looks like this:
Each header line starts with "%th=number", where the number goes from 2 to 180 in steps of 2 (so 2, 4, 6... up to 180).
Between those header lines I have three columns of data, which I would like to append to a csv file. With my current code I save only the header lines (%th=2, %th=4... %th=180). Do you have any idea how to change my code so that it reads a header line, appends the data below it to a .txt or .csv file, and then, when it "sees" the next header line, starts again and saves the next segment to another file, and so on up to "%th=180"?
UPDATE:
Input:
Expected output:
The program should append all the data below a "%th=number" line to one file, and when the next segment appears it should save that segment to another file, continuing like this until the end of the input file.
In other words, each segment starts with an even number (2, 4, 6, 8 ... 180), so I should get 90 files, one for each segment.
UPDATE 2:
So I have changed my code:
with open("S:\Doc\Python\Data\Dekomp\Hth.txt", 'r') as f:
with open("S:\Doc\Python\Data\Dekomp\Hth2.txt", 'w') as g:
for line in f:
if line.startswith("%"):
g.write(line)
if line.endswith("%"):
break
But right now the problem is that with the startswith and endswith checks Python saves only the header lines, and if I delete them the obvious thing happens: it saves everything from the input file.
data = []
filename = "S:\Doc\Python\Data\Dekomp\Hth.txt"
with open(filename) as f:
    lines = f.readlines()  # read the whole file

def _get_all_starting_index(data):  # indices of all lines starting with %
    return [data.index(line) for line in data if line.startswith("%")]

indices = _get_all_starting_index(lines)

data_info_to_write_in_file = {}  # data to write in each individual file
for i in range(len(indices)):  # loop over the header indices
    key = lines[indices[i]]  # key value for the start of a segment
    end_point = indices[i+1] if len(indices) > i+1 else len(lines)  # end of this segment (end of file for the last one)
    lines_to_get = lines[indices[i]+1 : end_point]  # lines in between, stored in the dictionary
    data_info_to_write_in_file[key] = lines_to_get

for key in data_info_to_write_in_file.keys():  # write each segment to its own file
    filename = "S:\Doc\Python\Data\Dekomp\{}.txt".format(key.strip().split("=")[-1])
    with open(filename, 'w') as f:
        for line in data_info_to_write_in_file[key]:
            f.write(line)
Hope it helps.
Feel free to ask if you need more info.
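A single-pass variant is also possible (just a sketch under the same assumptions: every segment starts with a "%th=..." header line, and the paths are the ones from the question):

# Open a new output file each time a '%' header line appears and write the
# following data lines into it, so the big file is read only once.
current = None
with open(r"S:\Doc\Python\Data\Dekomp\Hth.txt") as f:
    for line in f:
        if line.startswith("%"):
            if current is not None:
                current.close()
            name = line.strip().split("=")[-1]   # e.g. '2' from '%th=2'
            current = open(r"S:\Doc\Python\Data\Dekomp\{}.txt".format(name), "w")
        elif current is not None:
            current.write(line)
if current is not None:
    current.close()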

Python: Write two columns in csv for many lines

I have two parameters, filename and time, and I want to write them as columns in a csv file. These two parameters are set inside a for-loop, so their values change on each iteration.
My current Python code is below, but the resulting csv is not what I want:
import csv
import os

with open("txt/scalable_decoding_time.csv", "wb") as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    filename = ["one", "two", "three"]
    time = ["1", "2", "3"]
    zipped_lists = zip(filename, time)
    for row in zipped_lists:
        print row
        writer.writerow(row)
My csv file must look like below. The , must be the delimiter, so I must get two columns.
one, 1
two, 2
three, 3
At the moment, however, the data are all stored in one column.
Do you know how to fix this?
Well, the issue here is that you are using writerows instead of writerow.
import csv
import os

with open("scalable_decoding_time.csv", "wb") as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    level_counter = 0
    max_levels = 3
    filename = ["one", "two", "three"]
    time = ["1", "2", "3"]
    while level_counter < max_levels:
        writer.writerow((filename[level_counter], time[level_counter]))
        level_counter = level_counter + 1
This gave me the result:
one,1
two,2
three,3
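If you happen to be on Python 3 rather than Python 2 (an assumption; the code above uses Python 2's "wb" mode), a sketch of the same fix would open the file in text mode with newline='':

import csv

# Python 3 sketch: text mode + newline='' instead of "wb"
filename = ["one", "two", "three"]
time = ["1", "2", "3"]
with open("scalable_decoding_time.csv", "w", newline='') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    for row in zip(filename, time):
        writer.writerow(row)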
This is another solution. Put the following code into a Python script that we will call sc-123.py:
filename = ["one","two", "three"]
time = ["1","2", "3"]
for a,b in zip(filename,time):
print('{}{}{}'.format(a,',',b))
Once the script is ready, run it like this:
python2 sc-123.py > scalable_decoding_time.csv
You will have the results formatted the way you want
one,1
two,2
three,3

Python3: split up list and save as file - how to?

I'm kinda new to Python, so thanks for your help!
I want to tell Python to take a big .csv file and split it up into many small lists of only two columns each:
Take this .csv file
Always use column "year", which is the first column
Then always take the next column (for-loop?), starting with column 2, which is "Object1", then column 3, which is "Object2", and so on...
Save each list as .csv - now only containing two columns - and name it after the second column (e.g. "Object1")
So far I am up to this:
import csv

object = 0
f = open("/home/Data/data.csv")
csv_f = csv.reader(f, delimiter=';', quotechar='|')
writer = csv.writer(csv_f)
for row in csv_f:
    writer("[0],[object]")
    object += 1
f.close()
Your code is trying to open the same file for reading and writing, which may have unexpected results.
Think about your problem as a series of steps; one way to approach the problem is:
Open the big file
Read the first line of the file, which contains the column titles.
Go through the column titles (the first line of your big csv file), skipping the first one, then:
For each column title, create a new csv file, where the filename is the name of the column.
Take the value of the first column, plus the value of the column you are currently reading, and write it to the file.
Repeat till all column titles are read
Close the file
Close the big file.
Here is the same approach as above, taking advantage of Python's csv reading capabilities:
import csv

with open('big-file.csv') as f:
    reader = csv.reader(f, delimiter=';', quotechar='|')
    titles = next(reader)
    for index, column_name in enumerate(titles[1:]):
        with open('{}.csv'.format(column_name), 'w') as i:
            writer = csv.writer(i, delimiter=';', quotechar='|')
            for row in reader:
                writer.writerow((row[0], row[index+1]))
        f.seek(0)     # start from the top of the big file again
        next(reader)  # skip the header row
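As a quick illustration (made-up data, not from the question): if big-file.csv contained

year;Object1;Object2
2001;10;20
2002;11;21

the script would create Object1.csv holding the year/Object1 pairs (2001;10 and 2002;11) and Object2.csv holding the year/Object2 pairs (2001;20 and 2002;21).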

Multiple output files created but empty

I am trying to split one file with two articles in it into two separate files with one article in each, for subsequent analysis of the articles. Each article in the initial file has an ID that I want to use to separate the files with, using RE.
Below is the initial input file, with ID number:
166068619 #### "Epilepsy: let's end our ignorance of this neglected condition
Helen Stephens is a young woman with epilepsy [...]."
106899978 #### "Great British Payoff shows that BBC governance is broken
If it was a television series, they'd probably call it [...]."
However, when I run my code, I do get two separate files as an output but they are empty.
This is my code:
def file_split(path_to_file):
    """Function splits bigger file into N smaller ones, based on a certain RE
    match, that is used to break the bigger file into smaller ones"""
    def pattern_extract(path_to_file):
        """Function identifies the number of RE occurrences in a file,
        No. can be used in further analysis as range No."""
        import re
        x = []
        with open(path_to_file) as f:
            for line in f:
                match = re.search(r'^\d+?\t####\t', line)
                if match:
                    a = match.group()
                    x.append(a)
        return len(x)

    y = pattern_extract(path_to_file)
    m = y + 1
    files = [open('filename%i.txt' % i, 'w') for i in range(1, m)]
    with open(path_to_file) as f:
        for line in f:
            match = re.search(r'^\d+?\t####\t', line)
            if match:
                a = match.group()
                #files = [open('filename%i.txt' %i, 'w') for i in range(1, m)]
                files[i-1].write(a)
    for f in files:
        f.close()
    return files
Output result is as follows:
file_split(path)
Out[19]:
[<open file 'filename1.txt', mode 'w' at 0x7fe121b130c0>,
<open file 'filename2.txt', mode 'w' at 0x7fe121b131e0>]
I am new to Python and I am not quite sure where the problem lies. I checked some other answers that addressed the multiple file outputs but cannot figure out the solution. Help would be very much appreciated.
There are two problems with your code:
you write only the line matching the ID (actually, just the match itself), not the rest
you are always writing to the last file, as you use i, the loop variable "left over" from the list comprehension
To fix it, you could change the lower portion of your code to this:
y = pattern_extract(path_to_file)
files = [open('filename%i.txt' % i, 'w') for i in range(y)]
n = -1
with open(path_to_file) as f:
    for line in f:
        if re.search(r'^\d+\s+####\s+', line):
            n += 1
        files[n].write(line)
But you do not have to read the file two times at all, just to count the matches: Just open another file when the line matches an ID line and directly write to that last file in the list, then close all the files.
open_files = []
with open(path_to_file) as f:
    for line in f:
        if re.search(r'^\d+\s+####\s+', line):
            open_files.append(open('filename%d.txt' % len(open_files), 'w'))
        open_files[-1].write(line)
for f in open_files:
    f.close()
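Note that both snippets match the ID line with \s+ around #### rather than the \t of your original pattern, so they work whether the separator is a tab or plain spaces, as in the sample shown above.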

Extracting columnar data correctly as it is in the file

Suppose I have a tabular column as below. Now I want to extract the data column-wise. I tried extracting the data by creating a list, but while the first row is extracted correctly, from the second row onwards there is leading space (i.e. under CEN/4), so my code treats -5.000000E-01 as the zeroth column from the second row on and starts reading from there. How do I extract the data correctly, column-wise? The output is scrambled.
0 1 25 CEN/4 -5.000000E-01 -3.607026E+04 -5.747796E+03 -8.912796E+02 -88.3178
5.000000E-01 3.607026E+04 5.747796E+03 8.912796E+02 1.6822
27 -5.000000E-01 -3.641444E+04 -5.783247E+03 -8.912796E+02 -88.3347
5.000000E-01 3.641444E+04 5.783247E+03 8.912796E+02 1.6653
28 -5.000000E-01 -3.641444E+04 -5.712346E+03 -8.912796E+02 -88.3386
5.000000E-01 3.641444E+04 5.712346E+03 8.912796E+02
My code is:
f1 = open('newdata1.txt', 'w')
L = []
for index, line in enumerate(open('Trial_1.txt', 'r')):
    #print index
    if index < 0:  # skip first 5 lines
        continue
    else:
        line = line.split()
        L.append('%s\t%s\t %s\n' % (line[0], line[1], line[2]))
f1.writelines(L)
f1.close()
my output looks like this:
0 1 CEN/4 -5.000000E-01 -5.120107E+04
5.000000E-01 5.120107E+04 1.028093E+04 5.979930E+03 8.1461
I want the columnar data as it is in the file. How do I do that? I am a beginner.
It's hard to tell from the way the input data is presented in your question, but I'm guessing your file is using tabs to separate columns. In any case, consider using Python's csv module with the relevant delimiter, like:
import csv

with open('input.csv') as f_in, open('newdata1', 'w') as f_out:
    reader = csv.reader(f_in, delimiter='\t')
    writer = csv.writer(f_out, delimiter='\t')
    for row in reader:
        writer.writerow(row)
See the Python csv module documentation for further details.
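If the columns turn out to be separated by runs of spaces rather than by tabs (which is what the pasted sample suggests), a plain whitespace split is a reasonable sketch to try (filenames borrowed from the question):

# Sketch assuming whitespace-separated columns; write them back out
# tab-separated so downstream tools see one consistent delimiter.
with open('Trial_1.txt') as f_in, open('newdata1.txt', 'w') as f_out:
    for line in f_in:
        fields = line.split()   # split on any run of whitespace
        f_out.write('\t'.join(fields) + '\n')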