I am using this dataframe:
Fruit Date Name Number
Apples 10/6/2016 Bob 7
Apples 10/6/2016 Bob 8
Apples 10/6/2016 Mike 9
Apples 10/7/2016 Steve 10
Apples 10/7/2016 Bob 1
Oranges 10/7/2016 Bob 2
Oranges 10/6/2016 Tom 15
Oranges 10/6/2016 Mike 57
Oranges 10/6/2016 Bob 65
Oranges 10/7/2016 Tony 1
Grapes 10/7/2016 Bob 1
Grapes 10/7/2016 Tom 87
Grapes 10/7/2016 Bob 22
Grapes 10/7/2016 Bob 12
Grapes 10/7/2016 Tony 15
I would like to aggregate this by Name and then by Fruit to get a total number of Fruit per Name. For example:
Bob,Apples,16
I tried grouping by Name and Fruit but how do I get the total number of Fruit?
Use GroupBy.sum:
df.groupby(['Fruit','Name']).sum()
Out[31]:
Number
Fruit Name
Apples Bob 16
Mike 9
Steve 10
Grapes Bob 35
Tom 87
Tony 15
Oranges Bob 67
Mike 57
Tom 15
Tony 1
To specify the column to sum, use this: df.groupby(['Name', 'Fruit'])['Number'].sum()
You can also use the agg function:
df.groupby(['Name', 'Fruit'])['Number'].agg('sum')
If you want to keep the original columns Fruit and Name, use reset_index(). Otherwise Fruit and Name will become part of the index.
df.groupby(['Fruit','Name'])['Number'].sum().reset_index()
Fruit Name Number
Apples Bob 16
Apples Mike 9
Apples Steve 10
Grapes Bob 35
Grapes Tom 87
Grapes Tony 15
Oranges Bob 67
Oranges Mike 57
Oranges Tom 15
Oranges Tony 1
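Putting the snippets above together, here is a self-contained sketch that rebuilds the question's data and reproduces the grouped totals (the literal values are copied from the question):

```python
import pandas as pd

# Rebuild the sample data from the question.
df = pd.DataFrame({
    'Fruit': ['Apples'] * 5 + ['Oranges'] * 5 + ['Grapes'] * 5,
    'Date': ['10/6/2016', '10/6/2016', '10/6/2016', '10/7/2016', '10/7/2016',
             '10/7/2016', '10/6/2016', '10/6/2016', '10/6/2016', '10/7/2016',
             '10/7/2016', '10/7/2016', '10/7/2016', '10/7/2016', '10/7/2016'],
    'Name': ['Bob', 'Bob', 'Mike', 'Steve', 'Bob',
             'Bob', 'Tom', 'Mike', 'Bob', 'Tony',
             'Bob', 'Tom', 'Bob', 'Bob', 'Tony'],
    'Number': [7, 8, 9, 10, 1, 2, 15, 57, 65, 1, 1, 87, 22, 12, 15],
})

# Group by the two key columns and sum Number;
# as_index=False keeps Fruit and Name as regular columns.
totals = df.groupby(['Fruit', 'Name'], as_index=False)['Number'].sum()
print(totals)
```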
As seen in the other answers:
df.groupby(['Fruit','Name'])['Number'].sum()
Number
Fruit Name
Apples Bob 16
Mike 9
Steve 10
Grapes Bob 35
Tom 87
Tony 15
Oranges Bob 67
Mike 57
Tom 15
Tony 1
Both the other answers accomplish what you want.
You can use the pivot functionality to arrange the data in a nice table (note that recent pandas versions require keyword arguments for pivot):
df.groupby(['Fruit','Name'], as_index=False)['Number'].sum().pivot(index='Fruit', columns='Name', values='Number').fillna(0)
Name Bob Mike Steve Tom Tony
Fruit
Apples 16.0 9.0 10.0 0.0 0.0
Grapes 35.0 0.0 0.0 87.0 15.0
Oranges 67.0 57.0 0.0 15.0 1.0
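Alternatively, pivot_table does the grouping and pivoting in one call — a minimal sketch using the question's data (Date column omitted since it isn't needed here):

```python
import pandas as pd

df = pd.DataFrame({
    'Fruit': ['Apples'] * 5 + ['Oranges'] * 5 + ['Grapes'] * 5,
    'Name': ['Bob', 'Bob', 'Mike', 'Steve', 'Bob',
             'Bob', 'Tom', 'Mike', 'Bob', 'Tony',
             'Bob', 'Tom', 'Bob', 'Bob', 'Tony'],
    'Number': [7, 8, 9, 10, 1, 2, 15, 57, 65, 1, 1, 87, 22, 12, 15],
})

# Sum Number per (Fruit, Name) pair and spread Name across the columns,
# filling missing combinations with 0.
table = df.pivot_table(index='Fruit', columns='Name', values='Number',
                       aggfunc='sum', fill_value=0)
print(table)
```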
df.groupby(['Fruit','Name'])['Number'].sum()
You can choose which column to sum by selecting it after the groupby.
A variation on the .agg() function: it (1) keeps the result as a DataFrame, (2) lets you apply averages, counts, summations, etc., and (3) enables a groupby on multiple columns while maintaining legibility.
df.groupby(['att1', 'att2']).agg({'att1': "count", 'att3': "sum",'att4': 'mean'})
Using your values:
df.groupby(['Name', 'Fruit']).agg({'Number': "sum"})
You can set the groupby columns as the index, then sum over those index levels:
df.set_index(['Fruit','Name']).sum(level=[0,1])  # pandas 2.0+ removed level=; use .groupby(level=[0,1]).sum()
Out[175]:
Number
Fruit Name
Apples Bob 16
Mike 9
Steve 10
Oranges Bob 67
Tom 15
Mike 57
Tony 1
Grapes Bob 35
Tom 87
Tony 15
You could also use transform() on the Number column after the group by. This operation calculates the total number within each group with the sum function; the result is a series with the same index as the original dataframe.
df['Number'] = df.groupby(['Fruit', 'Name'])['Number'].transform('sum')
df = df.drop_duplicates(subset=['Fruit', 'Name']).drop('Date', axis=1)
Then you can drop the duplicate rows on the Fruit and Name columns, and drop the Date column by specifying axis=1 (0 for rows, 1 for columns).
# print(df)
Fruit Name Number
0 Apples Bob 16
2 Apples Mike 9
3 Apples Steve 10
5 Oranges Bob 67
6 Oranges Tom 15
7 Oranges Mike 57
9 Oranges Tony 1
10 Grapes Bob 35
11 Grapes Tom 87
14 Grapes Tony 15
# You could achieve the same result with functions discussed by others:
# print(df.groupby(['Fruit', 'Name'], as_index=False)['Number'].sum())
# print(df.groupby(['Fruit', 'Name'], as_index=False)['Number'].agg('sum'))
There is an official tutorial Group by: split-apply-combine talking about what you can do after group by.
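A runnable sketch of the transform approach, with the data rebuilt from the question:

```python
import pandas as pd

df = pd.DataFrame({
    'Fruit': ['Apples'] * 5 + ['Oranges'] * 5 + ['Grapes'] * 5,
    'Date': ['10/6/2016', '10/6/2016', '10/6/2016', '10/7/2016', '10/7/2016',
             '10/7/2016', '10/6/2016', '10/6/2016', '10/6/2016', '10/7/2016',
             '10/7/2016', '10/7/2016', '10/7/2016', '10/7/2016', '10/7/2016'],
    'Name': ['Bob', 'Bob', 'Mike', 'Steve', 'Bob',
             'Bob', 'Tom', 'Mike', 'Bob', 'Tony',
             'Bob', 'Tom', 'Bob', 'Bob', 'Tony'],
    'Number': [7, 8, 9, 10, 1, 2, 15, 57, 65, 1, 1, 87, 22, 12, 15],
})

# Overwrite Number with the group total, broadcast back to every row...
df['Number'] = df.groupby(['Fruit', 'Name'])['Number'].transform('sum')
# ...then keep one row per (Fruit, Name) pair and drop the Date column.
result = df.drop_duplicates(subset=['Fruit', 'Name']).drop(columns='Date')
print(result)
```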
If you want the aggregated column to have a custom name such as Total Number, Total, etc. (all the solutions here result in a dataframe where the aggregate column is named Number), use named aggregation:
df.groupby(['Fruit', 'Name'], as_index=False).agg(**{'Total Number': ('Number', 'sum')})
or (if the custom name doesn't need to have a white space in it):
df.groupby(['Fruit', 'Name'], as_index=False).agg(Total=('Number', 'sum'))
This is equivalent to the SQL query:
SELECT Fruit, Name, sum(Number) AS Total
FROM df
GROUP BY Fruit, Name
Speaking of SQL, there's the pandasql module, which allows you to query pandas DataFrames locally using SQL syntax. It's not part of pandas, so it has to be installed separately.
#! pip install pandasql
from pandasql import sqldf
sqldf("""
SELECT Fruit, Name, sum(Number) AS Total
FROM df
GROUP BY Fruit, Name
""")
You can use dfsql. For your problem, it will look something like:
df.sql('SELECT fruit, sum(number) GROUP BY fruit')
https://github.com/mindsdb/dfsql
Here is an article about it:
https://medium.com/riselab/why-every-data-scientist-using-pandas-needs-modin-bringing-sql-to-dataframes-3b216b29a7c0
You can use reset_index() to reset the index after the sum
df.groupby(['Fruit','Name'])['Number'].sum().reset_index()
or
df.groupby(['Fruit','Name'], as_index=False)['Number'].sum()
I am trying to build a dynamic rank column that will update when a slicer is selected. There are 2 slicers on the page: RegionalManager and SalesManager
There is only one table, say tblSales. I have tried various combinations of RANKX but nothing seems to work. Can someone help me with this? Here is sample data along with the scenarios.
RegionalManager SalesManager SalesPerson Sales Rank
Bill Patty John 20 6
Bill Patty Sally 10 7
Bill Patty Connie 30 4
Bill Connie Jim 40 3
Bill Connie Amanda 70 1
Zack Tracy Trevor 5 8
Zack Matt Breanna 25 5
Zack Mike Pam 45 2
If I filter on Bill the Rank should be this:
RegionalManager SalesManager SalesPerson Sales Rank
Bill Patty John 20 4
Bill Patty Sally 10 5
Bill Patty Connie 30 3
Bill Connie Jim 40 2
Bill Connie Amanda 70 1
If I filter on Bill and Connie the Rank should be this:
RegionalManager SalesManager SalesPerson Sales Rank
Bill Connie Jim 40 2
Bill Connie Amanda 70 1
Add a measure as follows:
Rank = RANKX(ALLSELECTED(tblSales), CALCULATE( SUM(tblSales[Sales])))
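DAX can't be executed here, but the behaviour RANKX over ALLSELECTED is meant to produce — re-ranking only the rows that survive the slicers — can be sanity-checked with a pandas sketch (data copied from the question):

```python
import pandas as pd

sales = pd.DataFrame({
    'RegionalManager': ['Bill'] * 5 + ['Zack'] * 3,
    'SalesManager': ['Patty', 'Patty', 'Patty', 'Connie', 'Connie',
                     'Tracy', 'Matt', 'Mike'],
    'SalesPerson': ['John', 'Sally', 'Connie', 'Jim', 'Amanda',
                    'Trevor', 'Breanna', 'Pam'],
    'Sales': [20, 10, 30, 40, 70, 5, 25, 45],
})

def rank_selection(frame):
    # ALLSELECTED keeps only the rows visible under the current slicers,
    # so the rank is computed over the filtered frame alone.
    out = frame.copy()
    out['Rank'] = out['Sales'].rank(ascending=False, method='min').astype(int)
    return out

# Emulate filtering the RegionalManager slicer to Bill.
bill = rank_selection(sales[sales['RegionalManager'] == 'Bill'])
print(bill)
```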
I have data that looks like this:
Date        Name  SurveyID  Score  Error
2022-02-17  Jack  10        95     Name
2022-02-17  Jack  10        95     Address
2022-02-16  Tom   9         100
2022-02-16  Carl  8         93     Zip
2022-02-16  Carl  8         93     Email
2022-02-15  Dan   7         72     Zip
2022-02-15  Dan   7         72     Email
2022-02-15  Dan   7         72     Name
2022-02-15  Dan   6         90     Phone
2022-02-14  Tom   5         98     Gender
I wanted to create segmentation data using the average score per individual.
Segment
A: 98%-100%
B: 95%-97%
C: 90%-94%
D: 80%-89%
E: 0% -79%
I wrote an ifelse formula for this:
ifelse({Score} >= 98,'A',ifelse({Score} >= 95,'B',ifelse({Score} >= 90,'C',ifelse({Score} >= 80,'D','E'))))
This is now the output of what I did:
Date        Name  SurveyID  Score  Error    Segment
2022-02-17  Jack  10        95     Name     B
2022-02-17  Jack  10        95     Address  B
2022-02-16  Tom   9         100             A
2022-02-16  Carl  8         93     Zip      C
2022-02-16  Carl  8         93     Email    C
2022-02-15  Dan   7         72     Zip      E
2022-02-15  Dan   7         72     Email    E
2022-02-15  Dan   7         72     Name     E
2022-02-15  Dan   6         90     Phone    C
2022-02-14  Tom   5         98     Gender   A
I realized that the calculation I did only applies to each row's score. I was expecting an output like this:
Name  Average Score  Total Survey  Segment
Jack  95             1             B
Tom   99             2             A
Carl  93             1             C
Dan   81             2             D
I have tried to create another calculated field for Average Score which is:
avgOver({Score}, [Name], PRE_AGG)
I believe I am missing a distinct count of survey IDs in that formula, but I do not know where to place it. As for the segmentation calculation, I cannot for the life of me figure that part out without getting aggregation errors in QuickSight. Please help, thank you.
Got the answer from the QuickSight Community; pasting it here.
For segmentation, you can use the calculated field you created for the average score:
avg_score = avgOver({Score}, [Name], PRE_AGG)
Segment =
ifelse(
    {avg_score} >= 98, 'A',
    {avg_score} >= 95, 'B',
    {avg_score} >= 90, 'C',
    {avg_score} >= 80, 'D',
    'E'
)
The survey id can be used to get the distinct count per individual.
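The same logic can be sanity-checked in pandas: de-duplicate per survey, average the scores per person, count distinct surveys, and bin the average (data copied from the question; the segment function mirrors the ifelse thresholds above):

```python
import pandas as pd

surveys = pd.DataFrame({
    'Name': ['Jack', 'Jack', 'Tom', 'Carl', 'Carl',
             'Dan', 'Dan', 'Dan', 'Dan', 'Tom'],
    'SurveyID': [10, 10, 9, 8, 8, 7, 7, 7, 6, 5],
    'Score': [95, 95, 100, 93, 93, 72, 72, 72, 90, 98],
})

def segment(score):
    # Same thresholds as the ifelse formula.
    if score >= 98: return 'A'
    if score >= 95: return 'B'
    if score >= 90: return 'C'
    if score >= 80: return 'D'
    return 'E'

# One row per (Name, SurveyID), so each survey counts once.
per_survey = surveys.drop_duplicates(subset=['Name', 'SurveyID'])
summary = per_survey.groupby('Name', sort=False).agg(
    **{'Average Score': ('Score', 'mean'),
       'Total Survey': ('SurveyID', 'nunique')}
).reset_index()
summary['Segment'] = summary['Average Score'].apply(segment)
print(summary)
```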
How do I sum only the amounts that are checked on Sheet2 for each name?
Sheet1
Column A
Tom
Susan
Sheet2
Column A Column B Column C
Tom 100 (un-checked)
Susan 150 (checked)
Susan 75 (un-checked)
Tom 25 (checked)
Susan 50 (checked)
Solved!
=SUMIFS(Sheet2!B1:B,Sheet2!A1:A,Sheet1!A1:A,Sheet2!C1:C,true)
Output:
Tom 25
Susan 200
=SUMIFS(M2:M,A2:A,"Susan",N2:N,true)
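For comparison, the checked-only sum can be sketched in pandas (sheet data copied from the question; the column names are assumptions, since the sheet uses letter columns):

```python
import pandas as pd

sheet2 = pd.DataFrame({
    'Name': ['Tom', 'Susan', 'Susan', 'Tom', 'Susan'],
    'Amount': [100, 150, 75, 25, 50],
    'Checked': [False, True, False, True, True],  # checkbox column C
})

# Keep only checked rows, then total per name -- the SUMIFS equivalent.
checked_totals = sheet2[sheet2['Checked']].groupby('Name')['Amount'].sum()
print(checked_totals)
```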
I have a dataframe like this:
Name one two
John A 20
John P 30
Alex B 40
David C 50
Harry A 60
Harry P 40
I want to combine the rows where A and P both occur for the same name, like this:
Name one two
John A+P 50
Alex B 40
David C 50
Harry A+P 100
I tried pandas' row-wise sum function but didn't get the output in the needed form. Kindly help me out!
Use DataFrameGroupBy.agg with join and sum:
df = df.groupby('Name', sort=False, as_index=False).agg({'one':'+'.join, 'two':'sum'})
print (df)
Name one two
0 John A+P 50
1 Alex B 40
2 David C 50
3 Harry A+P 100
I've got a dataframe like this:
Name Nationality Tall Age
John USA 190 24
Thomas French 194 25
Anton Malaysia 180 23
Chris Argentina 190 26
Let's say I get an incoming data structure like this, with each element representing the data of one row:
data = [{
'food':{'lunch':'Apple',
'breakfast':'Milk',
'dinner':'Meatball'},
'drink':{'favourite':'coke',
'dislike':'juice'}
},
..//and 3 other records
]
'data' is a variable that holds the predicted food and drink from my machine learning model. There are more records (about 400k rows), but I process them in batches through iteration (right now 2k records per iteration). Expected result:
Name Nationality Tall Age Lunch Breakfast Dinner Favourite Dislike
John USA 190 24 Apple Milk Meatball Coke Juice
Thomas French 194 25 ....
Anton Malaysia 180 23 ....
Chris Argentina 190 26 ....
Is there an effective way to achieve that dataframe? So far I've tried iterating over the data variable and getting the value of each predicted label, but that process feels like it takes too much time.
You need to flatten the dictionaries first, create a DataFrame from them, and join it to the original:
data = [{
'a':{'lunch':'Apple',
'breakfast':'Milk',
'dinner':'Meatball'},
'b':{'favourite':'coke',
'dislike':'juice'}
},
{
'a':{'lunch':'Apple1',
'breakfast':'Milk1',
'dinner':'Meatball2'},
'b':{'favourite':'coke2',
'dislike':'juice3'}
},
{
'a':{'lunch':'Apple4',
'breakfast':'Milk5',
'dinner':'Meatball4'},
'b':{'favourite':'coke2',
'dislike':'juice4'}
},
{
'a':{'lunch':'Apple3',
'breakfast':'Milk8',
'dinner':'Meatball7'},
'b':{'favourite':'coke4',
'dislike':'juice1'}
}
]
# flatten each record's nested dicts into one flat dict per row
L = [{k: v for x in d.values() for k, v in x.items()} for d in data]
df1 = pd.DataFrame(L)
print (df1)
breakfast dinner dislike favourite lunch
0 Milk Meatball juice coke Apple
1 Milk1 Meatball2 juice3 coke2 Apple1
2 Milk5 Meatball4 juice4 coke2 Apple4
3 Milk8 Meatball7 juice1 coke4 Apple3
df2 = df.join(df1)
print (df2)
Name Nationality Tall Age breakfast dinner dislike favourite \
0 John USA 190 24 Milk Meatball juice coke
1 Thomas French 194 25 Milk1 Meatball2 juice3 coke2
2 Anton Malaysia 180 23 Milk5 Meatball4 juice4 coke2
3 Chris Argentina 190 26 Milk8 Meatball7 juice1 coke4
lunch
0 Apple
1 Apple1
2 Apple4
3 Apple3