I have a file with the following lines:
B99990001 1 2 3 4
B99990001 1 3 3 4
B99990002 1 2 3 4
B99990002 1 3 3 4
B99990003 1 2 3 4
B99990003 1 3 3 4
My aim is to build a main list containing three sublists, grouped by the first column of each line (B99990001, B99990002, B99990003):
Mainlist=[
['B99990001 1 2 3 4','B99990001 1 3 3 4'], #sublist1 has B99990001
['B99990002 1 2 3 4','B99990002 1 3 3 4'], #sublist2 has B99990002
['B99990003 1 2 3 4','B99990003 1 3 3 4']  #sublist3 has B99990003
]
I hope my question is understandable. If someone knows how to do this, could you help me out?
Thanks in advance.
Here is my real example:
import os
import re
pdbPathAndName = ['/Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990001.pdb','/Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990002.pdb']
''' /Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990001.pdb=[
'ATOM 138 SG CYS 19 4.499 4.286 8.260 1.00 71.96 S',
'ATOM 397 SG CYS 50 14.897 3.238 9.338 1.00 34.60 S',
'ATOM 424 SG CYS 54 5.649 5.914 8.639 1.00 42.68 S',
'ATOM 774 SG CYS 97 12.114 -6.864 23.897 1.00 62.23 S',
'ATOM 865 SG CYS 108 15.200 3.910 11.227 1.00 54.49 S' ]
/Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990002.pdb=[
'ATOM 929 SG CYS 117 13.649 -6.894 22.589 1.00106.90 S',
'ATOM 138 SG CYS 19 4.499 4.286 8.260 1.00 71.96 S',
'ATOM 397 SG CYS 50 14.897 3.238 9.338 1.00 34.60 S',
'ATOM 424 SG CYS 54 5.649 5.914 8.639 1.00 42.68 S',
'ATOM 774 SG CYS 97 12.114 -6.864 23.897 1.00 62.23 S',
'ATOM 865 SG CYS 108 15.200 3.910 11.227 1.00 54.49 S',
'ATOM 929 SG CYS 117 13.649 -6.894 22.589 1.00106.90 S' ] '''
for path in pdbPathAndName:
    f = open(path, 'r').readlines()
    f = map(lambda x: x.strip(), f)
    for line in f:
        if "SG" in line and line.endswith("S"):
            print (path.split("/")[-1] + "_" + re.split('\s+', line)[1] + ":" + re.split('\s+', line)[5] + ":" + re.split('\s+', line)[6] + ":" + re.split('\s+', line)[7])
#PRINTED OUTPUT
'''E3P.B99990001.pdb_138:6.923:0.241:6.116
E3P.B99990001.pdb_397:15.856:3.506:8.144
E3P.B99990001.pdb_424:8.558:1.315:6.627
E3P.B99990001.pdb_774:14.204:-5.490:24.812
E3P.B99990001.pdb_865:15.545:4.258:10.007
E3P.B99990001.pdb_929:16.146:-6.081:24.770
E3P.B99990002.pdb_138:4.499:4.286:8.260
E3P.B99990002.pdb_397:14.897:3.238:9.338
E3P.B99990002.pdb_424:5.649:5.914:8.639
E3P.B99990002.pdb_774:12.114:-6.864:23.897
E3P.B99990002.pdb_865:15.200:3.910:11.227
E3P.B99990002.pdb_929:13.649:-6.894:22.589'''
#MY EXPECTED OUTPUT
''' Mainlist=[
['E3P.B99990001.pdb_138:6.923:0.241:6.116',
 'E3P.B99990001.pdb_397:15.856:3.506:8.144',
 'E3P.B99990001.pdb_424:8.558:1.315:6.627',
 'E3P.B99990001.pdb_774:14.204:-5.490:24.812',
 'E3P.B99990001.pdb_865:15.545:4.258:10.007',
 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'], #sublist1
['E3P.B99990002.pdb_138:4.499:4.286:8.260',
 'E3P.B99990002.pdb_397:14.897:3.238:9.338',
 'E3P.B99990002.pdb_424:5.649:5.914:8.639',
 'E3P.B99990002.pdb_774:12.114:-6.864:23.897',
 'E3P.B99990002.pdb_865:15.200:3.910:11.227',
 'E3P.B99990002.pdb_929:13.649:-6.894:22.589']  #sublist2
]'''
#then use these sublists to make combinations
for sublists in mainlist:
    Combinatedlist = map(dict, itertools.combinations(sublists.iteritems(), 2))
#since these are separate sublists there won't be any crossing between sublist1 and sublist2 while doing combinations
#but I still didn't get a proper result; if you can, please suggest a better way
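For what it's worth, grouping the entries into sublists first (a plain dict keyed on the file-name prefix) and only then calling itertools.combinations on each sublist avoids the iteritems error above. This is a minimal sketch with a few hypothetical sample entries, not the full PDB-parsing script:
import itertools
from collections import OrderedDict

# hypothetical sample entries; in the real script these come from the PDB parsing loop above
atom_coordinates = [
    'E3P.B99990001.pdb_138:6.923:0.241:6.116',
    'E3P.B99990001.pdb_397:15.856:3.506:8.144',
    'E3P.B99990002.pdb_138:4.499:4.286:8.260',
    'E3P.B99990002.pdb_397:14.897:3.238:9.338',
]

# group by the file-name prefix (everything before the first '_')
groups = OrderedDict()
for entry in atom_coordinates:
    groups.setdefault(entry.split('_')[0], []).append(entry)
mainlist = list(groups.values())

# pairs are taken per sublist, so they never cross between files
combinations = []
for sublist in mainlist:
    combinations.extend(itertools.combinations(sublist, 2))
print(combinations)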
Hi guys, I found an answer myself: I insert a marker pattern between the blocks, split on that marker to build the sublists, and then make the combinations from them.
My code:
import fileinput
import os
import re
import itertools
import math
import sys
pdbPathAndName = ['/Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990001.pdb','/Users/Mahesh/Documents/MAHESH_INTERNSHIP_2014/ENZOWP2/2WC5_090715_170128/E3P/E3P.B99990002.pdb']
ATOM_COORDINATE=[]
for path in pdbPathAndName:
    f = open(path, 'r').readlines()
    f = map(lambda x: x.strip(), f)
    for line in f:
        if "SG" in line and line.endswith("S"):
            ATOM_COORDINATE.append(path.split("/")[-1] + "_" + re.split('\s+', line)[1] + ":" + re.split('\s+', line)[5] + ":" + re.split('\s+', line)[6] + ":" + re.split('\s+', line)[7])
    ATOM_COORDINATE.append("foo")
#Making the main list of sublists by splitting on the "foo" marker
MAINLIST = []
sub = []
for item in ATOM_COORDINATE:
    if item == 'foo':
        MAINLIST.append(sub)
        sub = []
    else:
        sub.append(item)
#Making combinations out of the sublists
COMBINATION = []
for sublists in MAINLIST:
    # with 6 entries per sublist, range(2, 6, 4) yields only 2, so these are pairs
    for L in range(2, len(sublists), 4):
        for subset in itertools.combinations(sublists, L):
            COMBINATION.append(subset)
OUTPUT:
MainlistWithSublists:
[['E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_397:15.856:3.506:8.144', 'E3P.B99990001.pdb_424:8.558:1.315:6.627', 'E3P.B99990001.pdb_774:14.204:-5.490:24.812', 'E3P.B99990001.pdb_865:15.545:4.258:10.007', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'], ['E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_397:14.897:3.238:9.338', 'E3P.B99990002.pdb_424:5.649:5.914:8.639', 'E3P.B99990002.pdb_774:12.114:-6.864:23.897', 'E3P.B99990002.pdb_865:15.200:3.910:11.227', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589']]
Combination out of sublists:
[('E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_397:15.856:3.506:8.144'), ('E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_424:8.558:1.315:6.627'), ('E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_774:14.204:-5.490:24.812'), ('E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_865:15.545:4.258:10.007'), ('E3P.B99990001.pdb_138:6.923:0.241:6.116', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'), ('E3P.B99990001.pdb_397:15.856:3.506:8.144', 'E3P.B99990001.pdb_424:8.558:1.315:6.627'), ('E3P.B99990001.pdb_397:15.856:3.506:8.144', 'E3P.B99990001.pdb_774:14.204:-5.490:24.812'), ('E3P.B99990001.pdb_397:15.856:3.506:8.144', 'E3P.B99990001.pdb_865:15.545:4.258:10.007'), ('E3P.B99990001.pdb_397:15.856:3.506:8.144', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'), ('E3P.B99990001.pdb_424:8.558:1.315:6.627', 'E3P.B99990001.pdb_774:14.204:-5.490:24.812'), ('E3P.B99990001.pdb_424:8.558:1.315:6.627', 'E3P.B99990001.pdb_865:15.545:4.258:10.007'), ('E3P.B99990001.pdb_424:8.558:1.315:6.627', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'), ('E3P.B99990001.pdb_774:14.204:-5.490:24.812', 'E3P.B99990001.pdb_865:15.545:4.258:10.007'), ('E3P.B99990001.pdb_774:14.204:-5.490:24.812', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'), ('E3P.B99990001.pdb_865:15.545:4.258:10.007', 'E3P.B99990001.pdb_929:16.146:-6.081:24.770'), ('E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_397:14.897:3.238:9.338'), ('E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_424:5.649:5.914:8.639'), ('E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_774:12.114:-6.864:23.897'), ('E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_865:15.200:3.910:11.227'), ('E3P.B99990002.pdb_138:4.499:4.286:8.260', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589'), ('E3P.B99990002.pdb_397:14.897:3.238:9.338', 'E3P.B99990002.pdb_424:5.649:5.914:8.639'), ('E3P.B99990002.pdb_397:14.897:3.238:9.338', 'E3P.B99990002.pdb_774:12.114:-6.864:23.897'), ('E3P.B99990002.pdb_397:14.897:3.238:9.338', 'E3P.B99990002.pdb_865:15.200:3.910:11.227'), ('E3P.B99990002.pdb_397:14.897:3.238:9.338', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589'), ('E3P.B99990002.pdb_424:5.649:5.914:8.639', 'E3P.B99990002.pdb_774:12.114:-6.864:23.897'), ('E3P.B99990002.pdb_424:5.649:5.914:8.639', 'E3P.B99990002.pdb_865:15.200:3.910:11.227'), ('E3P.B99990002.pdb_424:5.649:5.914:8.639', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589'), ('E3P.B99990002.pdb_774:12.114:-6.864:23.897', 'E3P.B99990002.pdb_865:15.200:3.910:11.227'), ('E3P.B99990002.pdb_774:12.114:-6.864:23.897', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589'), ('E3P.B99990002.pdb_865:15.200:3.910:11.227', 'E3P.B99990002.pdb_929:13.649:-6.894:22.589')]
Thanks to all
If you want to have the exact same output:
from collections import OrderedDict

d = OrderedDict()
with open('file.txt') as f:
    for line in f:
        splitted = line.strip().split()
        key = splitted[0]
        if key not in d:
            d[key] = []
        d[key].append(' '.join(splitted[1:]))

mainList = [[key + ' ' + item for item in d[key]] for key in d]
print mainList
Output:
[['B99990001 1 2 3 4', 'B99990001 1 3 3 4'],
['B99990002 1 2 3 4', 'B99990002 1 3 3 4'],
['B99990003 1 2 3 4', 'B99990003 1 3 3 4']]
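An equivalent sketch using itertools.groupby (my addition, not part of the original answer) gives the same main list, assuming the file is already sorted by its first column as in the sample:
from itertools import groupby

with open('file.txt') as f:
    lines = [line.strip() for line in f if line.strip()]

# groupby only merges consecutive lines, hence the sorted-file assumption
mainList = [list(group) for _, group in groupby(lines, key=lambda line: line.split()[0])]
print(mainList)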
If you can, just use a dictionary:
from collections import defaultdict
s = """B99990001 1 2 3 4
B99990001 1 3 3 4
B99990002 1 2 3 4
B99990002 1 3 3 4
B99990003 1 2 3 4
B99990003 1 3 3 4"""
d = defaultdict(list)
for line in s.split('\n'):
    index, values = line.split(maxsplit=1)
    d[index].append(values)
Output (dictionary d):
d = {
'B99990003': ['1 2 3 4', '1 3 3 4'],
'B99990001': ['1 2 3 4', '1 3 3 4'],
'B99990002': ['1 2 3 4', '1 3 3 4'],
}
If you really need to use a list of lists instead of a dict, you can just convert this back to a list:
l = [['%s %s' % (index, value) for value in d[index]] for index in d]
You can sort it using sorted(l) if you prefer a sorted version.
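For example, converting the defaultdict back and sorting it (a short sketch continuing from the code above):
l = [['%s %s' % (index, value) for value in d[index]] for index in d]
print(sorted(l))
# [['B99990001 1 2 3 4', 'B99990001 1 3 3 4'],
#  ['B99990002 1 2 3 4', 'B99990002 1 3 3 4'],
#  ['B99990003 1 2 3 4', 'B99990003 1 3 3 4']]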
Related
I have the following data frame:
id my_year my_month waiting_time target
001 2018 1 95 1
002 2018 1 3 3
003 2018 1 4 0
004 2018 1 40 1
005 2018 2 97 1
006 2018 2 3 3
007 2018 3 4 0
008 2018 3 40 1
I want to group by my_year and my_month, then in each group compute my_rate as
(# of records with waiting_time <= 90 and target == 1) / total records in the group
i.e. I am expecting output like:
my_year my_month my_rate
2018 1 0.25
2018 2 0.0
2018 3 0.5
I wrote the following code to compute the desired value my_rate:
def my_rate(data):
    waiting_time_list = data['waiting_time']
    target_list = data['target']
    total = len(data)
    my_count = 0
    for i in range(len(data)):
        if waiting_time_list[i] <= 90 and target_list[i] == 1:
            my_count += 1
    rate = float(my_count)/float(total)
    return rate

df.groupby(['my_year','my_month']).apply(my_rate)
However, I got the following error:
KeyError 0
KeyErrorTraceback (most recent call last)
<ipython-input-29-5c4399cefd05> in <module>()
17
---> 18 df.groupby(['my_year','my_month']).apply(my_rate)
/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/core/groupby.pyc in apply(self, func, *args, **kwargs)
714 # ignore SettingWithCopy here in case the user mutates
715 with option_context('mode.chained_assignment', None):
--> 716 return self._python_apply_general(f)
717
718 def _python_apply_general(self, f):
/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/core/groupby.pyc in _python_apply_general(self, f)
718 def _python_apply_general(self, f):
719 keys, values, mutated = self.grouper.apply(f, self._selected_obj,
--> 720 self.axis)
721
722 return self._wrap_applied_output(
/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/core/groupby.pyc in apply(self, f, data, axis)
1727 # group might be modified
1728 group_axes = _get_axes(group)
-> 1729 res = f(group)
1730 if not _is_indexed_like(res, group_axes):
1731 mutated = True
<ipython-input-29-5c4399cefd05> in conversion_rate(data)
8 #print total_waiting_time_list[i], target_list[i]
9 #print i, total_waiting_time_list[i], target_list[i]
---> 10 if total_waiting_time_list[i] <= 90:# and target_list[i] == 1:
11 convert_90_count += 1
12 #print 'convert ', convert_90_count
/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/core/series.pyc in __getitem__(self, key)
599 key = com._apply_if_callable(key, self)
600 try:
--> 601 result = self.index.get_value(self, key)
602
603 if not is_scalar(result):
/opt/conda/envs/python2/lib/python2.7/site-packages/pandas/core/indexes/base.pyc in get_value(self, series, key)
2426 try:
2427 return self._engine.get_value(s, k,
-> 2428 tz=getattr(series.dtype, 'tz', None))
2429 except KeyError as e1:
2430 if len(self) > 0 and self.inferred_type in ['integer', 'boolean']:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value (pandas/_libs/index.c:4363)()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_value (pandas/_libs/index.c:4046)()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc (pandas/_libs/index.c:5085)()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item (pandas/_libs/hashtable.c:13913)()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item (pandas/_libs/hashtable.c:13857)()
KeyError: 0
Any idea what I did wrong here? And how do I fix it? Thanks!
I believe it is better to use the mean of a boolean mask per group:
def my_rate(x):
    return ((x['waiting_time'] <= 90) & (x['target'] == 1)).mean()

df = df.groupby(['my_year','my_month']).apply(my_rate).reset_index(name='my_rate')
print (df)
my_year my_month my_rate
0 2018 1 0.25
1 2018 2 0.00
2 2018 3 0.50
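The same rate can also be computed without apply, as a rough sketch (assuming the original df, before the reassignment above): build the boolean mask on the whole frame and take its mean per group.
# mean of a True/False column per group gives the fraction of matching rows
mask = (df['waiting_time'] <= 90) & (df['target'] == 1)
out = mask.groupby([df['my_year'], df['my_month']]).mean().reset_index(name='my_rate')
print (out)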
Any idea what I did wrong here?
The problem is that waiting_time_list and target_list are not lists, but Series:
waiting_time_list = data['waiting_time']
target_list = data['target']
print (type(waiting_time_list))
<class 'pandas.core.series.Series'>
print (type(target_list))
<class 'pandas.core.series.Series'>
So positional indexing like this fails, because the second group has the indices 4, 5, not 0, 1:
if waiting_time_list[i] <= 90 and target_list[i] == 1:
To avoid this, convert the Series to lists:
waiting_time_list = data['waiting_time'].tolist()
target_list = data['target'].tolist()
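Alternatively, positional access with .iloc avoids converting to lists entirely; a sketch of the same function using that idea:
def my_rate(data):
    # .iloc always uses positions 0..len-1, regardless of the group's index labels
    my_count = 0
    for i in range(len(data)):
        if data['waiting_time'].iloc[i] <= 90 and data['target'].iloc[i] == 1:
            my_count += 1
    return float(my_count) / float(len(data))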
I have a dataframe (df1) as follows:
datetime m d 1d 2d 3d
2014-01-01 1 1 2 2 3
2014-01-02 1 2 3 4 3
2014-01-03 1 3 1 2 3
...........
2014-12-01 12 1 2 2 3
2014-12-31 12 31 2 2 3
I also have another dataframe (df2) as follows:
datetime m d
2015-01-02 1 2
2015-01-03 1 3
...........
2015-12-01 12 1
2015-12-31 12 31
I want to merge the 1d, 2d, 3d column values of df1 into df2.
There are two conditions:
(1) Only rows where m and d are the same in both df1 and df2 should be merged.
(2) If the df2 index satisfies index % 30 == 0, don't merge; the 1d, 2d, 3d values at those indices can be NaN.
I mean I want the new df2 to look like the following:
datetime m d 1d 2d 3d
2015-01-02 1 2 Nan Nan Nan
2015-01-03 1 3 1 2 3
...........
2015-12-01 12 1 2 2 3
2015-12-31 12 31 2 2 3
Thanks in advance!
I think you need to add the NaNs with loc and then merge with a left join:
import numpy as np
import pandas as pd

np.random.seed(10)
N = 365
rng = pd.date_range('2015-01-01', periods=N)
df_tr_2014 = pd.DataFrame(np.random.randint(10, size=(N, 3)), index=rng).reset_index()
df_tr_2014.columns = ['datetime','7d','15d','20d']
df_tr_2014.insert(1,'month', df_tr_2014['datetime'].dt.month)
df_tr_2014.insert(2,'day_m', df_tr_2014['datetime'].dt.day)
#print (df_tr_2014.head())
N = 366
rng = pd.date_range('2016-01-01', periods=N)
df_te = pd.DataFrame(index=rng)
df_te['month'] = df_te.index.month
df_te['day_m'] = df_te.index.day
df_te = df_te.reset_index()
#print (df_te.tail())
df2 = df_te.copy()
df1 = df_tr_2014.copy()
df1 = df1.set_index('datetime')
df1.index += pd.offsets.DateOffset(years=1)
#correct 29 February
y = df1.index[0].year
df1 = df1.reindex(pd.date_range(pd.datetime(y,1,1), pd.datetime(y,12,31)))
idx = df1.index[(df1.index.month == 2) & (df1.index.day == 29)]
df1.loc[idx, :] = df1.loc[idx - pd.Timedelta(1, unit='d'), :].values
df1.loc[idx, 'day_m'] = idx.day
df1[['month','day_m']] = df1[['month','day_m']].astype(int)
df1[['7d','15d', '20d']] = df1[['7d','15d', '20d']].astype(float)
df1.loc[np.arange(len(df1.index)) % 30 == 0, ['7d','15d','20d']] = 0
df1 = df1.reset_index()
print (df1.iloc[57:62])
index month day_m 7d 15d 20d
57 2016-02-27 2 27 2.0 0.0 1.0
58 2016-02-28 2 28 2.0 3.0 5.0
59 2016-02-29 2 29 2.0 3.0 5.0
60 2016-03-01 3 1 0.0 0.0 0.0
61 2016-03-02 3 2 7.0 6.0 9.0
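The merge step itself is not shown in the excerpt above; a minimal sketch of the left join described at the start of this answer, assuming df_te and the prepared df1 from the code above, might look like this:
# left join keeps every row of df_te; rows with no (month, day_m) match get NaN in 7d/15d/20d
# df1's 'index' column is dropped first so it does not collide with df_te's own 'index' column
df2 = df_te.merge(df1.drop('index', axis=1), on=['month', 'day_m'], how='left')
print (df2.head())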
Why don't you just remove the rows in df1 that don't match (m, d) pairs in df2?
df_new = df2.drop(df2[(not ((df2.m == df1.m) & (df2.d == df1.d)).any()) or (df2.index % 30 == 0)].index)
Or something along those lines.
Link to a related answer.
I'm not enormously familiar with Pandas and have not tested the above example.
df_te is df2, df_tr_2014 is df1, and 7d, 15d, 20d are 1d, 2d, 3d respectively from the question. size_df_te is the length of df_te; month and day_m are m, d in df2.
df_te['7d'] = 0
df_te['15d'] = 0
df_te['20d'] = 0
size_df_te = len(df_te)
for i in range(size_df_te):
    if i % 30 != 0:
        m = df_te.loc[i,'month']
        d = df_te.loc[i,'day_m']
        if (m == 2) & (d == 29):
            m = 2
            d = 28
        dk_7 = df_tr_2014.loc[(df_tr_2014['month']==m) & (df_tr_2014['day_m']==d)]['7d']
        dk_15 = df_tr_2014.loc[(df_tr_2014['month']==m) & (df_tr_2014['day_m']==d)]['15d']
        dk_20 = df_tr_2014.loc[(df_tr_2014['month']==m) & (df_tr_2014['day_m']==d)]['20d']
        df_te.loc[i,'7d'] = float(dk_7)
        df_te.loc[i,'15d'] = float(dk_15)
        df_te.loc[i,'20d'] = float(dk_20)
EDIT:
Sample data:
np.random.seed(10)
N = 365
rng = pd.date_range('2014-01-01', periods=N)
df_tr_2014 = pd.DataFrame(np.random.randint(10, size=(N, 3)), index=rng).reset_index()
df_tr_2014.columns = ['datetime','7d','15d','20d']
df_tr_2014.insert(1,'month', df_tr_2014['datetime'].dt.month)
df_tr_2014.insert(2,'day_m', df_tr_2014['datetime'].dt.day)
#print (df_tr_2014.head())
N = 365
rng = pd.date_range('2015-01-01', periods=N)
df_te = pd.DataFrame(index=rng)
df_te['month'] = df_te.index.month
df_te['day_m'] = df_te.index.day
df_te = df_te.reset_index()
#print (df_te.head())
I have the following DataFrame named mydf:
A B
0 3de (1ABS) Adiran
1 3SA (SDAS) Adel
2 7A (ASA) Ronni
3 820 (SAAa) Emili
I want to remove the " (xxxx)" part and keep the values in column A, so the dataframe (mydf) will look like:
A B
0 3de Adiran
1 3SA Adel
2 7A Ronni
3 820 Emili
I have tried:
print mydf['A'].apply(lambda x: re.sub(r" \(.+\)", "", x) )
but then I get a Series object back and not a dataframe object.
I have also tried to use replace:
df.replace([' \(.*\)'], [""], regex=True), but it didn't change anything.
What am I doing wrong?
Thank you!
You can use the str.split() method:
In [3]: df.A = df.A.str.split('\s+\(').str[0]
In [4]: df
Out[4]:
A B
0 3de Adiran
1 3SA Adel
2 7A Ronni
3 820 Emili
or use the str.extract() method:
In [9]: df.A = df.A.str.extract(r'([^\(\s]*)', expand=False)
In [10]: df
Out[10]:
A B
0 3de Adiran
1 3SA Adel
2 7A Ronni
3 820 Emili
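As a side note on the question's replace attempt (a guess, since the exact frame isn't shown): DataFrame.replace returns a new object rather than modifying in place, so the result has to be assigned back, and it should be called on mydf rather than df:
# assign the result back (or pass inplace=True)
mydf = mydf.replace([r' \(.*\)'], [''], regex=True)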
I have an input file like this:
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
j,z,b,bsy,afj,upz,343,13,ruhwd
u,i,a,dvp,ibt,dxv,154,00,adsif
t,a,a,jqj,dtd,yxq,540,49,kxthz
c,u,g,nfk,ekh,trc,085,83,xppnl
For every unique value of column 1, I need to find the sum of column 7.
Similarly, for every unique value of column 2, I need to find the sum of column 7.
Output for 1 should be like:
j,686
u,308
t,98
c,83
Output for 2 should be like:
z,686
i,308
a,98
u,83
I am fairly new to Python. How can I achieve the above?
This could be done using Python's Counter and csv library as follows:
from collections import Counter
import csv
c1 = Counter()
c2 = Counter()
with open('input.csv') as f_input:
    for cols in csv.reader(f_input):
        col7 = int(cols[6])
        c1[cols[0]] += col7
        c2[cols[1]] += col7

print "Column 1"
for value, count in c1.iteritems():
    print '{},{}'.format(value, count)

print "\nColumn 2"
for value, count in c2.iteritems():
    print '{},{}'.format(value, count)
Giving you the following output:
Column 1
c,85
j,686
u,308
t,1080
Column 2
i,308
a,1080
z,686
u,85
A Counter is a type of Python dictionary that is useful for counting items automatically. c1 holds all of the column 1 entries and c2 holds all of the column 2 entries. Note, Python numbers lists starting from 0, so the first entry in a list is [0].
The csv library loads each line of the file into a list, with each entry in the list representing a different column. The code takes column 7 (i.e. cols[6]) and converts it into an integer, as all columns are held as strings. It is then added to the counter using either the column 1 or 2 value as the key. The result is two dictionaries holding the totaled counts for each key.
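For illustration only (a small standalone sketch, not part of the answer's script), a Counter accumulates values for repeated keys without any key-existence check:
from collections import Counter

totals = Counter()
for key, value in [('j', 343), ('u', 154), ('j', 343)]:
    totals[key] += value  # missing keys start at 0, so no KeyError
print(totals)  # Counter({'j': 686, 'u': 154})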
You can use pandas:
import pandas as pd

df = pd.read_csv('my_file.csv', header=None)
print(df.groupby(0)[6].sum())
print(df.groupby(1)[6].sum())
Output:
0
c 85
j 686
t 1080
u 308
Name: 6, dtype: int64
1
a 1080
i 308
u 85
z 686
Name: 6, dtype: int64
The data frame should look like this:
print(df.head())
Output:
0 1 2 3 4 5 6 7 8
0 j z b bsy afj upz 343 13 ruhwd
1 u i a dvp ibt dxv 154 0 adsif
2 t a a jqj dtd yxq 540 49 kxthz
3 j z b bsy afj upz 343 13 ruhwd
4 u i a dvp ibt dxv 154 0 adsif
You can also use your own names for the columns, like c1, c2, ..., c9:
df = pd.read_csv('my_file.csv', index_col=False, names=['c' + str(x) for x in range(1, 10)])
print(df)
Output:
c1 c2 c3 c4 c5 c6 c7 c8 c9
0 j z b bsy afj upz 343 13 ruhwd
1 u i a dvp ibt dxv 154 0 adsif
2 t a a jqj dtd yxq 540 49 kxthz
3 j z b bsy afj upz 343 13 ruhwd
4 u i a dvp ibt dxv 154 0 adsif
5 t a a jqj dtd yxq 540 49 kxthz
6 c u g nfk ekh trc 85 83 xppnl
Now, group by column c1 or column c2 and sum up column c7:
print(df.groupby(['c1'])['c7'].sum())
print(df.groupby(['c2'])['c7'].sum())
Output:
c1
c 85
j 686
t 1080
u 308
Name: c7, dtype: int64
c2
a 1080
i 308
u 85
z 686
Name: c7, dtype: int64
SO isn't supposed to be a code writing service, but I had a few minutes. :) Without pandas you can do it with the csv module:
import csv
def sum_to(results, key, add_value):
    if key not in results:
        results[key] = 0
    results[key] += int(add_value)

column1_results = {}
column2_results = {}

with open("input.csv", 'rt') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        sum_to(column1_results, row[0], row[6])
        sum_to(column2_results, row[1], row[6])

print column1_results
print column2_results
Results:
{'c': 85, 'j': 686, 'u': 308, 't': 1080}
{'i': 308, 'a': 1080, 'z': 686, 'u': 85}
Your expected results don't seem to match the math that Mike's answer and mine got using your spec. I'd double check that.
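Incidentally, the sum_to helper is roughly what collections.defaultdict(int) provides out of the box; a brief alternative sketch of the same loop:
import csv
from collections import defaultdict

column1_results = defaultdict(int)  # missing keys default to 0
column2_results = defaultdict(int)
with open("input.csv", 'rt') as csvfile:
    for row in csv.reader(csvfile):
        column1_results[row[0]] += int(row[6])
        column2_results[row[1]] += int(row[6])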
I am stuck on a fairly simple exercise looping through lists and am getting the error "TypeError: 'list' object is not callable".
I have three lists with n records each. I want to write the first record from all lists on the same line and repeat this for all n records, resulting in n lines. These are the lists I want to use:
lst1 = ['1','2','4','5','3']
lst2 = ['3','4','3','4','3']
lst3 = ['0.52','0.91','0.18','0.42','0.21']
istring = ""
lst = 0
for i in range(0,10): # range is simply the upper limit of the number of records in the lists
    entry = lst1(lst)
    istring = istring + entry.rjust(11) # the first entry from each list will be concatenated here
    lst = lst + 1
Any starting point would be really helpful.
This works for any size of lists:
for i in zip(lst1, lst2, lst3):
    for j in i:
        print j.rjust(11),
    print
1 3 0.52
2 4 0.91
4 3 0.18
5 4 0.42
3 3 0.21
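One caveat worth adding (not from the original answer): zip stops at the shortest list, so if the lists can differ in length, itertools.izip_longest (zip_longest on Python 3) pads the missing entries, for example:
from itertools import izip_longest  # zip_longest on Python 3

for row in izip_longest(lst1, lst2, lst3, fillvalue=''):
    print ' '.join(value.rjust(11) for value in row)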
>>> lst1 = ['1','2','4','5','3']
>>> lst2 = ['3','4','3','4','3']
>>> lst3 = ['0.52','0.91','0.18','0.42','0.21']
>>> a = zip(lst1, lst2, lst3)
>>> istring = ""
>>> for entry in a:
... istring += entry[0].rjust(11)
... istring += entry[1].rjust(11)
... istring += entry[2].rjust(11) + "\n"
...
>>> print istring
1 3 0.52
2 4 0.91
4 3 0.18
5 4 0.42
3 3 0.21
Try entry = lst1[lst] instead of entry = lst1(lst)
() usually denotes calling a function, whereas
[] usually denotes accessing an element of something.
A list is not a function.
Also, while you can keep your own index, a for loop makes this unnecessary:
x = [1,2,3,4,5,7,9,11,13,15]
y = [2,4,6,8,10,12,14,16,18,20]
z = [3,4,5,6,7,8,9,10,11,12]
for i in range(0,10):
    print x[i], y[i], z[i]
1 2 3
2 4 4
3 6 5
4 8 6
5 10 7
7 12 8
9 14 9
11 16 10
13 18 11
15 20 12