Average of top 10 values in Power BI

My data set looks like this:
Time Date ID Value
12 am 1st Oct 1001 100
12 am 2nd Oct 1001 203
12 am 3rd Oct 1001 403
.... ...... .... ....
11 pm 20th Oct 1001 566
12 am 1st Oct 1002 150
12 am 2nd Oct 1002 153
12 am 3rd Oct 1002 403
.... ...... .... ....
11 pm 10th Oct 1002 666
For each customer I have 20 days of data for each hour of the day. For every customer and hour, I need to calculate and show the average of the top 10 values out of those 20 days.
The output should look like this:
Time ID Average
12am 1001 105
1am 1001 056
... .... ...
11pm 1001 298
12am 1002 456
1am 1002 856
... ... ...
11pm 1002 166
How can I do that using Power BI?

Related

Add a dynamic constant line based on column in powerbi

I am trying to plot a line chart in Power BI with a reference line based on another column's value. I have data that represents the journeys of different cars on different sections of road. I am plotting the journeys that travel over the same section of road, e.g. RoadId 10001.
Distance JourneyNum Speed ThresholdSpeed RoadId
1 10 50 60 10001
2 10 51 60 10001
3 10 52 60 10001
1 11 45 60 10001
2 11 46 60 10001
3 11 47 60 10001
7 12 20 30 10009
8 12 21 30 10009
9 12 22 30 10009
10 12 23 30 10009
So currently I have:
Distance on x-axis (Axis),
Speed on y-axis (Values),
JourneyNum as the Legend (Legend),
filter to roadId 10001
I also want to add ThresholdSpeed as a reference line; just adding it as another line would also do. Any help?
I don't think it's possible (yet) to pass a measure to a constant line, so you'll need a different approach.
One possibility is to reshape your data so that ThresholdSpeed appears as part of your Legend. You can do this in DAX like so:
Table2 =
VAR NewRows =
    SELECTCOLUMNS (
        Table1,
        "Distance", Table1[Distance],
        "JourneyNum", "Threshold",
        "Speed", Table1[ThresholdSpeed],
        "ThresholdSpeed", Table1[ThresholdSpeed],
        "RoadId", Table1[RoadId]
    )
RETURN
    UNION ( Table1, DISTINCT ( NewRows ) )
Which results in a table like this:
Distance JourneyNum Speed ThresholdSpeed RoadId
1 10 50 60 10001
2 10 51 60 10001
3 10 52 60 10001
1 11 45 60 10001
2 11 46 60 10001
3 11 47 60 10001
1 Threshold 60 60 10001
2 Threshold 60 60 10001
3 Threshold 60 60 10001
7 12 20 30 10009
8 12 21 30 10009
9 12 22 30 10009
10 12 23 30 10009
7 Threshold 30 30 10009
8 Threshold 30 30 10009
9 Threshold 30 30 10009
10 Threshold 30 30 10009
Then make your line chart on this table instead.
Note: It's probably preferable to do this transformation in the query editor, though, so that you don't have redundant tables.

Facebook Graph API: page_fans_online, localizing dates to where the business is located

I am trying to replicate the data used in the "When your fans are online" section of a business page's insights dashboard. I am using the following parameters in the /insights/page_fans_online API call, which returns the data I am after:
parameters={'period':'day','since':'2018-10-20T07:00:00','until':'2018-10-21T07:00:00','access_token':page_token['access_token'][0]}
The data returned can be seen below, where:
end_time = end_time (based on the since & until dates in the parameters)
name = metric
apiHours = hour of day returned
localDate = localized date (applied manually)
localHours = -6 hour offset to localize to Auckland, New Zealand (applied manually to replicate what is seen on the insights dashboard)
fansOnline = number of unique page fans online during that hour
Data:
end_time                  name              apiHours  localDate   localHours  fansOnline
2018-10-21T07:00:00+0000  page_fans_online  0         2018-10-19  18          21
                                            1         2018-10-19  19          29
                                            2         2018-10-19  20          20
                                            3         2018-10-19  21          18
                                            4         2018-10-19  22          20
                                            5         2018-10-19  23          15
                                            6         2018-10-19  0           4
                                            7         2018-10-19  1           6
                                            8         2018-10-19  2           5
                                            9         2018-10-19  3           8
                                            10        2018-10-19  4           17
                                            11        2018-10-19  5           19
                                            12        2018-10-19  6           26
                                            13        2018-10-19  7           24
                                            14        2018-10-19  8           20
                                            15        2018-10-19  9           22
                                            16        2018-10-19  10          19
                                            17        2018-10-19  11          22
                                            18        2018-10-19  12          18
                                            19        2018-10-19  13          18
                                            20        2018-10-19  14          18
                                            21        2018-10-19  15          18
                                            22        2018-10-19  16          21
                                            23        2018-10-19  17          28
It took a while to work out that the data returned when pulling page_fans_online using the parameters specified above is for October 19th, for a New Zealand business page.
If we look at the last row in the data above:
end_time = 2018-10-21
apiHours = 23
localDate = 2018-10-19
localHours = 17
fansOnline = 28
It is saying that on 2018-10-21 @ 11 pm there were 28 unique fans online. When the dates and times are manually localized, this translates to 28 unique fans online on 2018-10-19 @ 5 pm (I worked the offset out by checking the "When your fans are online" graphs on the page insights).
That is a -54 hour offset between 2018-10-21 11:00 pm and 2018-10-19 5:00 pm. My question is: what is the logic behind the end_time and hour of day returned by the page_fans_online insights metric, and is there any info on how they should be localized depending on the country where the business is located?
The page/insights docs only give a brief description of page_fans_online and say the hours are in PST/PDT, but that does not help with localizing the date and hour of day:
https://developers.facebook.com/docs/graph-api/reference/v3.1/insights
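For reference, here is a minimal Python sketch of the manual localization described above. The helper and the fixed -6 hour / -2 day shifts are assumptions read off the table, not documented Graph API behaviour; it simply reproduces the localDate and localHours columns.

from datetime import datetime, timedelta

# Hypothetical helper: shift each returned hour back by 6 and date the whole
# 24-hour block two days before end_time, as in the manually localized table.
def localize(end_time_str, api_hour, hour_offset=-6, day_offset=-2):
    end_time = datetime.strptime(end_time_str, "%Y-%m-%dT%H:%M:%S%z")
    local_hour = (api_hour + hour_offset) % 24          # wraps past midnight
    local_date = (end_time + timedelta(days=day_offset)).date()
    return local_date, local_hour

# Last row above: apiHours 23 -> 2018-10-19 @ 5 pm
print(localize("2018-10-21T07:00:00+0000", 23))          # (2018-10-19, 17)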

Pandas - Finding percentage of values in each group

I have a CSV file like the one below.
Beat,Hour,Month,Primary Type,COUNTER
111,10AM,Apr,ASSAULT,12
111,10AM,Apr,BATTERY,5
111,10AM,Apr,BURGLARY,1
111,10AM,Apr,CRIMINAL DAMAGE,4
111,10AM,Aug,MOTOR VEHICLE THEFT,2
111,10AM,Aug,NARCOTICS,1
111,10AM,Aug,OTHER OFFENSE,18
111,10AM,Aug,THEFT,38
Now I want to find the percentage of each Primary Type grouped by the first three columns. For example, for Beat = 111, Hour = 10AM, Month = Apr, %Assault = 12/(12+5+1+4) * 100. Can anyone give a clue on how to do this using pandas?
You can use groupby with transform('sum'):
df['New']=df.COUNTER/df.groupby(['Beat','Hour','Month']).COUNTER.transform('sum')*100
df
Out[575]:
Beat Hour Month Primary Type COUNTER New
0 111 10AM Apr ASSAULT 12 54.545455
1 111 10AM Apr BATTERY 5 22.727273
2 111 10AM Apr BURGLARY 1 4.545455
3 111 10AM Apr CRIMINAL DAMAGE 4 18.181818
4 111 10AM Aug MOTOR VEHICLE THEFT 2 3.389831
5 111 10AM Aug NARCOTICS 1 1.694915
6 111 10AM Aug OTHER OFFENSE 18 30.508475
7 111 10AM Aug THEFT 38 64.406780
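If you also want each Primary Type as its own percentage column (e.g. an ASSAULT column holding 54.55 for Beat 111 / 10AM / Apr, matching the %Assault example above), here is a self-contained sketch of the same idea; the CSV file name is just a placeholder.

import pandas as pd

df = pd.read_csv("crimes.csv")  # placeholder name for the file shown above
# percentage of each row's COUNTER within its Beat/Hour/Month group
df['New'] = df.COUNTER / df.groupby(['Beat', 'Hour', 'Month']).COUNTER.transform('sum') * 100
# pivot so each Primary Type becomes its own percentage column
wide = df.pivot_table(index=['Beat', 'Hour', 'Month'], columns='Primary Type', values='New')
print(wide)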

A dynamic SAS program to consolidate dates of events that are nested within each other

Hello,
I want to write a dynamic program that flags the start and end dates of events that are nested within the consolidated dates shown at the top of each Pt.ID in the attached example. I can easily do this if there is only one such consolidated period per Pt.ID; however, there can be more than one consolidated period per Pt.ID (as shown for the second Pt.ID, 1002). As shown in the example, events that fall within the consolidated period(s) are flagged as "Y" in the flag variable, and events that do not fall within a consolidated period are flagged as "N". How can I write a program that accounts for all such consolidated periods per Pt.ID, compares them with the dates of the rest of that patient's events, and flags the events that fall within any of those consolidated periods?
Thank you.
So join the event records with the period records and calculate whether each event falls within the period; then take the MAX over all periods.
For example, here is code for your sample that creates a binary 1/0 flag variable called INCLUDED.
data Sample;
  infile datalines missover;
  input Pt_ID Event_ID Category $ Start_Date : mmddyy10.
        Start_Day End_date : mmddyy10. End_day Duration;
  format Start_date End_date mmddyy10.;
datalines;
1001 . Moderate 8/5/2016 256 9/3/2016 285 30
1001 1 Moderate 3/8/2016 106 3/16/2016 114 9
1001 2 Moderate 8/5/2016 256 8/14/2016 265 10
1001 3 Moderate 8/21/2016 272 8/24/2016 275 4
1001 4 Moderate 8/23/2016 274 9/3/2016 285 12
1002 . Severe 11/28/2016 13 12/19/2016 34 22
1002 . Severe 2/6/2017 83 2/28/2017 105 23
1002 1 Severe 11/28/2016 13 12/5/2016 20 8
1002 2 Severe 12/12/2016 27 12/19/2016 34 8
1002 3 Severe 1/9/2017 55 1/12/2017 58 4
1002 4 Severe 2/6/2017 83 2/13/2017 90 8
1002 5 Severe 2/20/2017 97 2/28/2017 105 9
1002 6 Severe 3/17/2017 122 3/24/2017 129 8
1002 7 Severe 5/4/2017 170 5/13/2017 179 10
1002 8 Severe 5/24/2017 190 5/30/2017 196 7
1002 9 Severe 6/9/2017 206 6/13/2017 210 5
;
proc sql;
  create table want as
  select a.*
       /* 1 if the event falls inside any consolidated period for this Pt_ID */
       , max(b.start_date <= a.start_date and b.end_date >= a.end_date) as Included
  from sample a
  left join sample b
    on a.pt_id = b.pt_id and missing(b.event_id)
  group by 1,2,3,4,5,6,7,8
  order by a.pt_id, a.event_id, a.start_date, a.end_date
  ;
quit;

Stacking in pandas dataframe based on column name

I have following pandas Dataframe:
ID Year Jan_salary Jan_days Feb_salary Feb_days Mar_salary Mar_days
1 2016 4500 22 4200 18 4700 24
2 2016 3800 23 3600 19 4400 23
3 2016 5500 21 5200 17 5300 23
I want to convert this dataframe to following dataframe:
ID Year month salary days
1 2016 01 4500 22
1 2016 02 4200 18
1 2016 03 4700 24
2 2016 01 3800 23
2 2016 02 3600 19
2 2016 03 4400 23
3 2016 01 5500 21
3 2016 02 5200 17
3 2016 03 5300 23
I tried using pandas.DataFrame.stack but couldn't get the expected outcome.
I am using Python 2.7
Please guide me to reshape this Pandas dataframe.
Thanks.
# Move ID and Year into the index so only the month columns remain
df = df.set_index(['ID', 'Year'])
# Split 'Jan_salary'-style columns into a ('Jan', 'salary') MultiIndex
df.columns = df.columns.str.split('_', expand=True).rename('month', level=0)
# Stack the month level into rows, then restore ID and Year as columns
df = df.stack(0).reset_index()
# Map month abbreviations to the desired two-digit month numbers
md = dict(Jan='01', Feb='02', Mar='03')
df.month = df.month.map(md)
df[['ID', 'Year', 'month', 'salary', 'days']]
I love pd.melt so that's what I used in this long-winded approach:
ldf = pd.melt(df, id_vars=['ID', 'Year'],
              value_vars=['Jan_salary', 'Feb_salary', 'Mar_salary'],
              var_name='month', value_name='salary')
rdf = pd.melt(df, id_vars=['ID', 'Year'],
              value_vars=['Jan_days', 'Feb_days', 'Mar_days'],
              value_name='days')
rdf.drop(['ID', 'Year', 'variable'], inplace=True, axis=1)
cdf = pd.concat([ldf, rdf], axis=1)
cdf['month'] = cdf['month'].str.replace('_salary', '')

import calendar

def mapper(month_abbr):
    # from http://stackoverflow.com/a/3418092/42346
    d = {v: str(k).zfill(2) for k, v in enumerate(calendar.month_abbr)}
    return d[month_abbr]

cdf['month'] = cdf['month'].apply(mapper)
Result:
>>> cdf
ID Year month salary days
0 1 2016 01 4500 22
1 2 2016 01 3800 23
2 3 2016 01 5500 21
3 1 2016 02 4200 18
4 2 2016 02 3600 19
5 3 2016 02 5200 17
6 1 2016 03 4700 24
7 2 2016 03 4400 23
8 3 2016 03 5300 23
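If you want the rows ordered per ID as in the desired output above (rather than grouped by month), you can sort the concatenated frame at the end:

cdf = cdf.sort_values(['ID', 'month']).reset_index(drop=True)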