Pivot Table in Pandas Count aggregate() - python-2.7

I am trying to count the occurrences of 'Stage' per project. I used np.size as the aggfunc, but it returns the number of elements including the project column, so the count comes out doubled: when the expected count is 3, it returns 6.
I used the code below:
df = pd.pivot_table(data_frame, index=['Project'], columns=['Stage'], aggfunc=np.size, fill_value=0)

You need the aggregate function len (np.size counts every element in each group, rows times columns, which is likely why your counts doubled):
print (data_frame)
Project Stage
0 an ip
1 cfc pe
2 an ip
3 ap pe
4 cfc pe
5 an ip
6 cfc ip
df = pd.pivot_table(data_frame,
                    index='Project',
                    columns='Stage',
                    aggfunc=len,
                    fill_value=0)
print (df)
Stage ip pe
Project
an 3 0
ap 0 1
cfc 1 2
Another solution, with the string 'size':
df = pd.pivot_table(data_frame,
                    index='Project',
                    columns='Stage',
                    aggfunc='size',
                    fill_value=0)
print (df)
Stage ip pe
Project
an 3 0
ap 0 1
cfc 1 2
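For a pure frequency table like this, pd.crosstab is another option. A minimal sketch (crosstab is a standard pandas function, though it was not part of the original answer):
# crosstab counts row/column combinations directly; missing
# combinations are filled with 0 by default, so no fill_value is needed
df = pd.crosstab(data_frame['Project'], data_frame['Stage'])
print (df)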
EDIT (by comment) - to plot the result:
import matplotlib.pyplot as plt
# ... all of the code above ...
df.plot.bar()
plt.show()
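For reference, here is a self-contained sketch of the whole pipeline, using made-up sample data in place of the OP's file:
import matplotlib.pyplot as plt
import pandas as pd

# Sample data standing in for the OP's data_frame
data_frame = pd.DataFrame({'Project': ['an', 'cfc', 'an', 'ap', 'cfc', 'an', 'cfc'],
                           'Stage':   ['ip', 'pe', 'ip', 'pe', 'pe', 'ip', 'ip']})

df = pd.pivot_table(data_frame,
                    index='Project',
                    columns='Stage',
                    aggfunc='size',
                    fill_value=0)

df.plot.bar()   # one bar group per project, one bar per stage
plt.show()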

How to split dataframe or reorder dataframe by rows in pandas

I just want to clean and analyse my dataframe, but I ran into trouble. I created a simple dataframe to illustrate:
import pandas as pd
d = {'Resutls': ['IIL', 'pass','pass','IIH','pass','IIL','pass'], 'part':['None',1,2,'None',5,'None',4] }
df = pd.DataFrame(d)
the result looks like:
Resutls part
0 IIL None
1 pass 1
2 pass 2
3 IIH None
4 pass 5
5 IIL None
6 pass 4
There are some repeated blocks in the dataframe. I just want to reorder the rows and drop the duplicated marker rows, like:
Resutls part
0 IIL None
1 pass 1
2 pass 2
6 pass 4
3 IIH None
4 pass 5
or just split the dataframe into several sub dataframes:
Resutls part
0 IIL None
1 pass 1
2 pass 2
3 pass 4
Resutls part
0 IIH None
1 pass 5
This is just a simple example of what I want to do; my actual dataframe has thousands of rows. I tried to use reindex or df.iloc to do this. It is intuitive to me but seems a little complicated to achieve. Is there any good way to do this? Please advise.
I think you need to replace 'pass' with NaN and forward-fill, so every row gets the label of its block, then sort those labels with argsort and reorder with iloc:
df = df.iloc[df['Resutls'].mask(df['Resutls'].eq('pass')).ffill().argsort()]
print (df)
Resutls part
3 IIH None
4 pass 5
0 IIL None
1 pass 1
2 pass 2
5 IIL None
6 pass 4
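To see why this works, here is a sketch of the intermediate steps on a fresh copy of the sample data (my illustration, not part of the original answer):
import pandas as pd

tmp = pd.DataFrame({'Resutls': ['IIL', 'pass', 'pass', 'IIH', 'pass', 'IIL', 'pass'],
                    'part': ['None', 1, 2, 'None', 5, 'None', 4]})

# mask() hides the 'pass' markers, ffill() then copies the last block
# marker downwards, so every row carries the label of its block
labels = tmp['Resutls'].mask(tmp['Resutls'].eq('pass')).ffill()
print (labels.tolist())
# ['IIL', 'IIL', 'IIL', 'IIH', 'IIH', 'IIL', 'IIL']

# argsort() returns the positions that would sort those labels
# (e.g. [3, 4, 0, 1, 2, 5, 6]), which is the order iloc used above
print (labels.argsort().tolist())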
Last, remove the repeated marker rows by boolean indexing:
df = df[~df['Resutls'].duplicated() | (df['Resutls'] == 'pass')]
print (df)
Resutls part
3 IIH None
4 pass 5
0 IIL None
1 pass 1
2 pass 2
6 pass 4
If you want each DataFrame separately:
df['g'] = df['Resutls'].mask(df['Resutls'].eq('pass')).ffill()
df = df[~df['Resutls'].duplicated() | (df['Resutls'] == 'pass')]
print (df)
Resutls part g
0 IIL None IIL
1 pass 1 IIL
2 pass 2 IIL
3 IIH None IIH
4 pass 5 IIH
6 pass 4 IIL
dfs = {k:v.drop('g', axis=1) for k, v in df.groupby('g')}
#print (dfs)
print (dfs['IIH'])
Resutls part
3 IIH None
4 pass 5
print (dfs['IIL'])
Resutls part
0 IIL None
1 pass 1
2 pass 2
6 pass 4
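If you only need to loop over the sub-frames rather than keep them in a dict, groupby supports iteration directly. A small usage sketch, assuming the df with the helper column 'g' from above:
for key, sub in df.groupby('g'):
    print (key)
    print (sub.drop('g', axis=1))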

Issue Calculating Mean of Grouped Data for entire range of dataset using Pandas

I have a data set of daily temperatures for which I want to calculate 20 year means. The data look like this:
1974 1 1 5.3 4.6 7.3 3.4
1974 1 2 3.3 7.2 4.5 6.5
...
2005 12 364 4.2 5.2 3.3 4.6
2005 12 365 3.1 5.5 2.6 6.8
There is no header in the file but the first column contains the year, the second column the month, and the third column the day of the year. The rest of the columns are temperature data.
I want to calculate the average temperature for each day over a period of 20 years. I thought the best way to do that would be to group the data by day and calculate the mean of each day for a specific range of years. Here is my code:
import pandas as pd
hist_fn = 'tmean_daily_1974_2005.txt'
twenty_year_fn = '20_yr_mean_1974_1993.txt'
start = 1974
end = 1993
hist_mean = pd.read_csv(hist_fn, sep='\s+', header=None)
# Limit dataframe to only the 20 years for which I want the mean calculated
interval_mean = hist_mean[(hist_mean[0]>=start) & (hist_mean[0]<=end)]
# Rename the first column to reflect what mean this file is displaying
interval_mean.iloc[:, 0] = ("%s-%s" % (start, end))
# Generate mean for each day spread across all the years in the dataframe
interval_mean.iloc[:, 3:] = interval_mean.groupby(2, as_index=False).mean().iloc[:, 2:]
# Write multiyear mean to txt
interval_mean.to_csv(twenty_year_fn, sep='\t', header=False, index=False)
The data set spans more than 20 years. The method worked for the first 20-year interval but gives me a (mostly) empty text file for any other range of years.
So when I use these inputs it works:
start = 1974
end = 1993
and it produces a file that looks like this:
1974-1993 1 1 4.33 5.25 6.84 3.67
1974-1993 1 2 7.23 6.22 5.65 6.23
...
1974-1993 12 364 5.12 4.34 5.21 2.16
1974-1993 12 365 4.81 5.95 3.56 6.78
but when I change the inputs to this:
start = 1975
end = 1994
it produces a .txt file with no temperatures:
1975-1994 1 1
1975-1994 1 2
...
1975-1994 12 364
1975-1994 12 365
I don't understand why this method works for the first 20 year interval but none of the subsequent intervals. Is it something to do with the way the data is organized or how it is being sliced?
Now to the problem you presented:
The strange behavior is due to the fact that pandas aligns on the index during assignment, and slicing preserves the original index. That means that when setting
interval_mean.iloc[:, 3:] = interval_mean.groupby(2, as_index=False).mean().iloc[:, 2:]
note that interval_mean.groupby(2, as_index=False).mean() has indices 0, ..., 364 (since as_index=False makes the groupby create a fresh integer index; otherwise the index would have been the day number). On the other hand, interval_mean keeps the original indices from hist_mean: the first time (first 20 years) those are 0, ..., ~20*365, but the second time they start from around 365 and count up, so they no longer overlap the 0-364 index of the groupby result.
This is a bit confusing at first, but pandas offers great documentation about it, and people quickly discover why it is so useful.
I'll try to explain what happens with an example:
Assume we have the following DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.reshape(np.random.randint(5, size=30), [-1, 3]))
df
0 1 2
0 1 1 2
1 2 1 1
2 0 1 2
3 0 2 0
4 2 1 0
5 0 1 2
6 2 2 1
7 1 0 2
8 0 1 0
9 1 2 0
Note that the column names are 0,1,2 and the row names (the index) are 0, ..., 9.
When we perform a groupby we obtain:
df.groupby(0, as_index=False).mean()
0 1 2
0 0 1.250000 1.000000
1 1 1.000000 1.333333
2 2 1.333333 0.666667
(The index happens to equal the values of the grouped column only because we drew numbers between 0 and 2.) Now, when we assign to df.loc, it replaces every cell with the corresponding cell in the assignee, if such a cell exists. Otherwise, it leaves NaN.
df.loc[:,:] = df.groupby(0, as_index=False).mean()
df
0 1 2
0 0.0 1.250000 1.000000
1 1.0 1.000000 1.333333
2 2.0 1.333333 0.666667
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 NaN NaN NaN
7 NaN NaN NaN
8 NaN NaN NaN
9 NaN NaN NaN
And when you write NaN to csv, the cell is left blank.
The last piece of the puzzle is why interval_mean kept the original indices: slicing with a boolean mask preserves the index of the selected rows:
df[df[1] > 1]
0 1 2
3 0 2 0
6 2 2 1
9 1 2 0
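Given that diagnosis, one way to avoid the alignment trap entirely is to build the output frame from the groupby result instead of assigning it back into a slice of the original. A minimal sketch, assuming the OP's variables (hist_mean, start, end, twenty_year_fn) and column layout (0 = year, 1 = month, 2 = day of year, 3+ = temperatures); this is my suggestion, not code from the original answer:
interval = hist_mean[(hist_mean[0] >= start) & (hist_mean[0] <= end)]

# Group by month and day of year; the mean of the year column is
# meaningless, so drop it afterwards
daily_mean = interval.groupby([1, 2], as_index=False).mean().drop(0, axis=1)

# Label the rows with the interval and write out
daily_mean.insert(0, 'period', '%s-%s' % (start, end))
daily_mean.to_csv(twenty_year_fn, sep='\t', header=False, index=False)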

Pandas - Create two columns - Simple, no?

Well hello everyone!
I want to create a pandas DataFrame called df. It must contain "Id" and "Feature" columns. Any idea on how to do it?
I have tried the code below, but dictionaries are unordered, so the two columns come out in arbitrary order. I want "Id" as the first column and "Feature" as the second.
Thank you in advance! Have a loooong weekend!
df = DataFrame({'Feature': X["Feature"],'Id': X["Id"] })
From the pandas docs: "If no columns are passed, the columns will be the sorted list of dict keys." I use this simple trick to arrange the columns: just add "1", "2", etc. to the beginning of your column names. For example:
>>> df1 = pd.DataFrame({"Id":[1,2,3],"Feature":[5,6,7]})
>>> df1
Feature Id
0 5 1
1 6 2
2 7 3
>>> df2 = pd.DataFrame({"1Id":[1,2,3],"2Feature":[5,6,7]})
>>> df2
1Id 2Feature
0 1 5
1 2 6
2 3 7
>>> df2.columns = ["Id","Feature"]
>>> df2
Id Feature
0 1 5
1 2 6
2 3 7
Now you have the order you wanted for printing or saving the DataFrame.
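That said, the renaming trick isn't strictly necessary: the DataFrame constructor accepts a columns argument that both selects and orders the columns. A minimal sketch (assuming pandas is imported as pd):
>>> df3 = pd.DataFrame({"Id":[1,2,3],"Feature":[5,6,7]}, columns=["Id","Feature"])
>>> df3
Id Feature
0 1 5
1 2 6
2 3 7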
Is this what you wanted?
import numpy as np
import pandas as pd

data = ["id", "Feature"]
index = [1, 2]
s = pd.Series(data, index=index)  # a Series holding the names (not used below)
df = pd.DataFrame(np.random.randn(2, 2), index=index, columns=('id', 'features'))
The data frame:
>>> df['id']
1 0.254105
2 -0.132025
Name: id, dtype: float64
>>> df['features']
1 0.189972
2 2.262103
Name: features, dtype: float64

stop pd.DataFrame.from_csv() from converting integer index to date

pandas.DataFrame.from_csv(filename) seems to be converting my integer index into a date.
This is undesirable. How do I prevent this?
The code shown here is a toy version of a larger problem. In the larger problem, I am estimating and writing the parameters of statistical models for each zone for later use. I thought that by using a pandas dataframe indexed by zone, I could easily read back the parameters. While pickle or some other format like json might solve this problem, I'd like to see a pandas solution... except pandas is converting the zone number to a date.
#!/usr/bin/python
cache_file = "./mydata.csv"
import numpy as np
import pandas as pd
zones = [1, 2, 3, 8, 9, 10]

def create():
    data = []
    for z in zones:
        info = {'m': int(10*np.random.rand()), 'n': int(10*np.random.rand())}
        info.update({'zone': z})
        data.append(info)
    df = pd.DataFrame(data, index=zones)
    print "about to write this data:"
    print df
    df.to_csv(cache_file)

def read():
    df = pd.DataFrame.from_csv(cache_file)
    print "read this data:"
    print df

create()
read()
Sample output:
about to write this data:
m n zone
1 0 3 1
2 5 8 2
3 6 4 3
8 1 8 8
9 6 2 9
10 7 2 10
read this data:
m n zone
2013-12-01 0 3 1
2013-12-02 5 8 2
2013-12-03 6 4 3
2013-12-08 1 8 8
2013-12-09 6 2 9
2013-12-10 7 2 10
The CSV file looks OK, so the problem seems to be in reading not creating.
mydata.csv
,m,n,zone
1,0,3,1
2,5,8,2
3,6,4,3
8,1,8,8
9,6,2,9
10,7,2,10
I suppose this might be useful:
pd.__version__
0.12.0
Python version is python 2.7.5+
I want to record the zone as an index so I can easily pull out the corresponding
parameters later. How do I keep pandas.DataFrame.from_csv() from turning it into a date?
Reading the docs for pandas.DataFrame.from_csv: the parse_dates argument defaults to True. Set it to False.
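For example (note that in later pandas versions from_csv was deprecated in favour of read_csv, which does not parse dates unless asked):
df = pd.DataFrame.from_csv(cache_file, parse_dates=False)

# equivalent with read_csv, which never parses dates by default:
df = pd.read_csv(cache_file, index_col=0)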

Removing Percentages from a Data Frame

I have a dataframe that originated from an excel file. It has the usual headers above the columns but some of the columns have % signs in them which I want to remove.
Searching stackoverflow gives some nice code for removing percentages from matrices (Any way to edit values in a matrix in R?), but it did not work when I tried to apply it to my dataframe:
as.numeric(gsub("%", "", my.dataframe))
instead it just returns a string of NAs with a warning message explaining that they were introduced by coercion. When I applied
gsub("%", "", my.dataframe)
I got the values in "c(...)" form, where the ... represents numbers separated by commas, reproduced for every column I had. No % was in evidence; if I could just put this back together ... I'd be cooking.
Any help gratefully received, thanks.
Based on @Arun's comment, and imagining what your data.frame looks like:
> DF <- data.frame(X = paste0(1:5, '%'),
+                  Y = paste0(2*(1:5), '%'),
+                  Z = 3*(1:5), stringsAsFactors = FALSE)
> DF # this is how I imagine your data.frame looks
X Y Z
1 1% 2% 3
2 2% 4% 6
3 3% 6% 9
4 4% 8% 12
5 5% 10% 15
> # Using #Arun's suggestion
> (DF2 <- data.frame(sapply(DF, function(x) as.numeric(gsub("%", "", x)))))
X Y Z
1 1 2 3
2 2 4 6
3 3 6 9
4 4 8 12
5 5 10 15
I added as.numeric in the sapply call so that the resulting columns are numeric; without as.numeric the result would be factors. Check it using sapply(DF2, class).