I would like to know how to create the last column, Audit. To determine whether a row is Included or Not Included, the sum of the "Average Meeting/Day" values within the same Employee/Day group should match the "Actual Meeting" count the employee attended.
I've taken this as an example dataset -
Then use Group By and a Conditional Column to add the Audit column.
Group by -
Conditional Column -
Result -
Code -
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8s3PS0msVNJRMgRiAz1TMGkEJBOVYnVwSBtjShuBJQzhZBJYOqQ0tRhZuwlcHqg9=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Day = _t, Emp_ID = _t, #"Actual Meeting" = _t, #"Average Meeting/Day" = _t, Emp_Name = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Day", type text}, {"Emp_ID", Int64.Type}, {"Actual Meeting", type number}, {"Average Meeting/Day", type number}, {"Emp_Name", type text}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"Day", "Emp_ID"}, {{"Sum of avg", each List.Sum([#"Average Meeting/Day"]), type nullable number}, {"Actual Meeting", each List.Min([Actual Meeting]), type nullable number}}),
#"Added Conditional Column" = Table.AddColumn(#"Grouped Rows", "Audit", each if [Actual Meeting] = [Sum of avg] then "Included" else "Not Included")
in
#"Added Conditional Column"
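For anyone who wants to sanity-check the grouping logic outside of M, here is a minimal Python sketch of the same Group By + compare step (the row values are made up):

```python
from collections import defaultdict

# Hypothetical rows mirroring the example: (Day, Emp_ID, Actual Meeting, Average Meeting/Day)
rows = [
    ("Mon", 1, 5.0, 2.0),
    ("Mon", 1, 5.0, 3.0),
    ("Mon", 2, 4.0, 1.0),
    ("Mon", 2, 4.0, 2.0),
]

# Group by (Day, Emp_ID): sum the per-row averages and keep the Actual Meeting value
groups = defaultdict(lambda: {"sum_avg": 0.0, "actual": None})
for day, emp, actual, avg in rows:
    g = groups[(day, emp)]
    g["sum_avg"] += avg
    g["actual"] = actual  # identical within the group; List.Min in M just picks one

# Audit: "Included" when the summed averages match the actual meeting count
audit = {k: ("Included" if g["sum_avg"] == g["actual"] else "Not Included")
         for k, g in groups.items()}
```

This mirrors the query exactly: `Table.Group` produces `Sum of avg`, and the conditional column compares it to `Actual Meeting`.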
Not sure this is going to help, but here goes. Note: it is not set up to work with duplicate values among the potential items to analyze. I also have no idea how it behaves with many decimal places, where the sum of the decimals might be 0.0000001 off from the total and therefore not be recognized. It works for me with whole numbers.
let
Process=(x as table) as table =>
// Bill Szysz 2017, all combinations of items in list, blows up with too many items to process due to large number of combinations
let ComboList=x[AMD],
find=x[AM]{0},
Source=Table.FromList(List.Transform(ComboList, each Text.From(_))),
AddIndex = Table.AddIndexColumn(Source, "Index", 0, 1),
ReverseIndeks = Table.AddIndexColumn(AddIndex, "RevIdx", Table.RowCount(AddIndex), -1),
Lists = Table.AddColumn(ReverseIndeks, "lists", each
List.Repeat(
List.Combine({
List.Repeat({[Column1]}, Number.Power(2,[RevIdx]-1)),
List.Repeat( {null}, Number.Power(2,[RevIdx]-1))
})
, Number.Power(2, [Index]))
),
ResultTable = Table.FromColumns(Lists[lists]),
#"Replaced Value" = Table.ReplaceValue(ResultTable,null,"0",Replacer.ReplaceValue,Table.ColumnNames(ResultTable )),
#"Added Index" = Table.AddIndexColumn(#"Replaced Value", "Index", 0, 1, Int64.Type),
totals = Table.AddColumn(#"Added Index", "Custom", each List.Sum(List.Transform(Record.ToList( Table.SelectColumns(#"Added Index",Table.ColumnNames(ResultTable )){[Index]}) , each Number.From(_)))),
#"Filtered Rows" = Table.SelectRows(totals, each ([Custom] = find)),
#"Kept First Rows" = Table.FirstN(#"Filtered Rows",1),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Kept First Rows", {"Custom", "Index"}, "Attribute", "Value"),
#"Filtered Rows1" = Table.SelectRows(#"Unpivoted Other Columns", each ([Value] <> "0")),
#"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows1",{"Value"}),
FoundList = Table.TransformColumnTypes(#"Removed Other Columns",{{"Value", type number}}),
#"Merged Queries" = Table.NestedJoin(x, {"AMD"}, FoundList, {"Value"}, "output1", JoinKind.LeftOuter),
#"Expanded output1" = Table.ExpandTableColumn(#"Merged Queries", "output1", {"Value"}, {"Value"}),
#"Calculated Absolute Value" = Table.TransformColumns(#"Expanded output1",{{"Value", each if _=null then "Not Included" else "Included", type text}})
in #"Calculated Absolute Value",
Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Day", type text}, {"Employee_ID", Int64.Type}, {"AM", Int64.Type}, {"AMD", Int64.Type}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"Day", "Employee_ID"}, {{"data", each _, type table [Day=nullable text, Employee_ID=nullable number, AM=nullable number, AMD=nullable number]}}),
#"Added Custom" = Table.AddColumn(#"Grouped Rows", "Custom", each Process([data])),
#"Removed Other Columns" = Table.SelectColumns(#"Added Custom",{"Custom"}),
#"Expanded Custom" = Table.ExpandTableColumn(#"Removed Other Columns", "Custom", {"Day", "Employee_ID", "AM", "AMD", "Value"}, {"Day", "Employee_ID", "AM", "AMD", "Value"})
in
#"Expanded Custom"
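The combination table above is effectively a brute-force subset-sum search: it enumerates every subset of the group's values and keeps the first one that sums to the target. As a rough illustration (not the M code), here is the same search in Python, with a tolerance parameter to address the decimal-rounding concern in the note:

```python
from itertools import combinations

def find_included(values, target, tol=1e-9):
    """Return the index set of the first subset of `values` summing to `target`,
    or None. Like the combination table, this is exponential in len(values)."""
    for r in range(1, len(values) + 1):
        for combo in combinations(range(len(values)), r):
            if abs(sum(values[i] for i in combo) - target) <= tol:
                return set(combo)  # indices of the "Included" rows
    return None

values = [1.0, 2.0, 4.0]
hit = find_included(values, 5.0)
flags = ["Included" if hit and i in hit else "Not Included"
         for i in range(len(values))]
```

The `tol` comparison is what protects against a sum landing 0.0000001 off the target when the inputs have many decimals.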
I have a table for GL transactions for a whole period over the years. Now I need to develop a table in which I can see the opening balance and closing balance for each month.
Ex: (screenshot of the source transactions table)
The result I need: (screenshot of the desired output)
I tried the CLOSINGBALANCEMONTH function, but it gives the total of transactions done on the last day of the month as the closing balance.
This is not exactly what you want, but at least it gives correct results. Note the `>=` in the closing balance, so that transactions on the last day of the month are included.
Closing Balance
Closing Balance =
CALCULATE(SUM(Transactions[GL Transactions]),
    FILTER(ALL(Transactions),
        EOMONTH(MIN(Transactions[date]), 0) >= Transactions[date]))
Opening Balance
Opening Balance =
CALCULATE(SUM(Transactions[GL Transactions]),
    FILTER(ALL(Transactions),
        EOMONTH(MIN(Transactions[date]), -1) >= Transactions[date]))
There are also two more calculated columns: one for the report and one for sorting.
for the report :
Year & Month = FORMAT(Transactions[date],"MMM - YY")
for sorting (numeric, so months 10-12 sort correctly):
YearMonth = YEAR(Transactions[date]) * 100 + MONTH(Transactions[date])
see also the attached sample file
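Both measures amount to running totals: the opening balance is everything before the month, and the closing balance is everything through the end of the month. A quick Python sketch of that logic, with made-up transactions:

```python
from collections import OrderedDict

# Hypothetical (year, month, amount) transactions, already sorted by date
txns = [(2018, 1, 100.0), (2018, 1, 50.0), (2018, 2, -30.0), (2018, 3, 20.0)]

# Collapse to monthly totals, like grouping on Eomonth
monthly = OrderedDict()
for y, m, amt in txns:
    monthly[(y, m)] = monthly.get((y, m), 0.0) + amt

# Opening = running total before the month; Closing = running total including it
balances, running = {}, 0.0
for ym, total in monthly.items():
    opening = running
    running += total
    balances[ym] = (opening, running)  # (Opening Balance, Closing Balance)
```

This is the same cumulative pattern the Power Query answer below builds with `Table.SelectRows` over `Eomonth`.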
You can also try this approach, which solves it on the Power Query side:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("bY3BCcAwDAN38TsRioObdJbg/dcoFAwq9GfuJPkcG7RmAww4/b1p2YqHiB4iNniVcG3cIvpna4Hr78kU7pJ3cBeflvkA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"GL account" = _t, date = _t, #"GL Transactions" = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"date", type date}, {"GL Transactions", type number}}),
#"Sorted Rows" = Table.Sort(#"Changed Type",{{"date", Order.Ascending}}),
#"Added Custom1" = Table.AddColumn(#"Sorted Rows", "Eomonth", each Date.EndOfMonth([date])),
#"Grouped Rows" = Table.Group(#"Added Custom1", {"GL account", "Eomonth"}, {{"Sum GL Transactions", each List.Sum([GL Transactions]), type nullable number}}),
#"Added Custom" = Table.AddColumn(#"Grouped Rows", "Closing Balance", each Function.Invoke((current as date,tb as table)=>
List.Sum(Table.SelectRows(tb,each [Eomonth]>= List.Min(#"Grouped Rows"[Eomonth]) and [Eomonth]<=current)[Sum GL Transactions])
,{[Eomonth],#"Grouped Rows"})),
#"Added Custom2" = Table.AddColumn(#"Added Custom", "Opening Balance", each Function.Invoke((current as date,tb as table)=>
List.Sum(Table.SelectRows(tb,each [Eomonth]<current)[Sum GL Transactions])
,{[Eomonth],#"Grouped Rows"})),
#"Changed Type1" = Table.TransformColumnTypes(#"Added Custom2",{{"Closing Balance", type number}, {"Opening Balance", type number}, {"Eomonth", type date}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type1",{"Sum GL Transactions"}),
#"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Removed Columns", {"GL account", "Eomonth"}, "Attribute", "Value")
in
#"Unpivoted Columns"
I want to transform a column in Power Query, but the transformation should only be applied within a group, based on a condition. This is my data.
In the table above, I want to set the Office Based column to 1 on every row of an ID group if any row in that group has Office Based set to 1. If all the Office Based values in an ID group are 0, the column should be left unchanged.
My expected result would be,
It would also be fine if the transformed values went into an additional column.
Try this:
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", type text}, {"Item", type text}, {"Home Based", Int64.Type}, {"Office Based", Int64.Type}, {"Amount", Int64.Type}}),
// find all IDs with 1 in Office Based
#"Filtered Rows" = Table.SelectRows(#"Changed Type", each ([Office Based] = 1)),
#"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows",{"ID"}),
#"Removed Duplicates" = Table.Distinct(#"Removed Other Columns"),
//merge that back in
#"Merged Queries" = Table.NestedJoin(#"Changed Type", {"ID"}, #"Removed Duplicates", {"ID"}, "Table2", JoinKind.LeftOuter),
#"Expanded Table2" = Table.ExpandTableColumn(#"Merged Queries", "Table2", {"ID"}, {"ID.1"}),
// if there was a match convert to 1 otherwise take original number
#"Added Custom" = Table.AddColumn(#"Expanded Table2", "OfficeBased2", each try if Text.Length([ID.1])>0 then 1 else [Office Based] otherwise [Office Based]),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Office Based", "ID.1"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{{"OfficeBased2", "OfficeBased"}})
in #"Renamed Columns"
or the more compact version:
let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Added Custom" = Table.AddColumn(Source,"Custom",(i)=>Table.SelectRows(Source, each [ID]=i[ID]) [Office Based]),
#"Added Custom1" = Table.AddColumn(#"Added Custom", "Office Based2", each if List.Contains([Custom],1) then 1 else [Office Based]),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom1",{"Custom", "Office Based"})
in #"Removed Columns"
The first method probably works best for large data sets.
Here's another method:
Group by ID
Apply the Table.TransformColumns operation to each subtable
Then re-expand
let
Source = Excel.CurrentWorkbook(){[Name="Table2"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", type text}, {"Item", type text}, {"Home Based", Int64.Type}, {"Office Based", Int64.Type}, {"Amount", Int64.Type}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"ID"}, {
{"Xform", (t)=>Table.TransformColumns(t, {
{"Office Based", each if List.ContainsAny(t[Office Based],{1}) then 1 else 0}}) ,
type table
[ID=nullable text, Item=nullable text, Home Based=nullable number, Office Based=nullable number, Amount=nullable number]}
}),
#"Expanded Xform" = Table.ExpandTableColumn(#"Grouped Rows", "Xform",
{"Item", "Home Based", "Office Based", "Amount"},
{"Item", "Home Based", "Office Based", "Amount"})
in
#"Expanded Xform"
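All three answers implement the same two-step idea: detect whether any row in an ID group has Office Based = 1, then overwrite the whole group accordingly. A minimal Python sketch of that logic (sample values are invented):

```python
from collections import defaultdict

# Hypothetical rows: (ID, Office Based)
rows = [("a", 0), ("a", 1), ("b", 0), ("b", 0)]

# Pass 1: does any row in the ID group have Office Based = 1?
has_one = defaultdict(bool)
for id_, office in rows:
    has_one[id_] = has_one[id_] or office == 1

# Pass 2: overwrite with 1 only in groups where the flag fired
result = [(id_, 1 if has_one[id_] else office) for id_, office in rows]
```

The merge approach does pass 1 with a filtered join, while the grouped approach does it per subtable with `List.ContainsAny`.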
Consider:
I have four columns (A1, A2, A3 & A4) and I want to remove the duplicate values across these four columns, grouping by the Index column.
For example, if Index 1 has a value in A1 and the same value also appears in A2, the duplicate should be removed. If the value does not repeat, it should stay. In other words, each Index can hold a given value only once across the four columns.
I would use Pivot and Unpivot to get this result.
Starting with a mock dataset:
I was able to transform it to this:
Here is the advanced query for your review:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("ZZDdCoMwDIXfpddezG3u51mKF9FJG2Zt0Anu7ZfUdWQUSs7hfIeU1lpTm8rASB5Yv6etrDmqWGNhJ5V1w0ujc4lyQ3BTxA5C+OGLCjCW/MrmCUSgdFozvbEZIXQPicP6x+5sNswjBuznOCknnfrAAQmffeS5oEtX75oa8lnkcX8wLe+0YXBM2w8=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, A1 = _t, A2 = _t, A3 = _t, A4 = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", Int64.Type}, {"A1", type text}, {"A2", type text}, {"A3", type text}, {"A4", type text}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type", {"ID"}, "Attribute", "Value"),
#"Filtered Rows" = Table.SelectRows(#"Unpivoted Other Columns", each [Value] <> null and [Value] <> ""),
#"Removed Duplicates" = Table.Distinct(#"Filtered Rows", {"Value", "ID"}),
#"Pivoted Column" = Table.Pivot(#"Removed Duplicates", List.Distinct(#"Removed Duplicates"[Attribute]), "Attribute", "Value"),
#"Replaced Value" = Table.ReplaceValue(#"Pivoted Column",null,"",Replacer.ReplaceValue,{"A1", "A2", "A3", "A4"})
in
#"Replaced Value"
The idea is that you will unpivot your columns so that it is possible to take advantage of 'remove duplicates' functionality. Then pivot the dataset back to its original form.
If you want to close the gaps by shifting values to the left, you will need to do a little more work. While the data is unpivoted, create a new index over the subgroups. This is an intermediate level task, Curbal has a good example (https://www.youtube.com/watch?v=7CqXdSEN2k4). Discard your 'Attribute' column from the unpivot and use your new subgroup index instead.
Here's the advanced editor for the shift:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("ZZDdCoMwDIXfpddezG3u51mKF9FJG2Zt0Anu7ZfUdWQUSs7hfIeU1lpTm8rASB5Yv6etrDmqWGNhJ5V1w0ujc4lyQ3BTxA5C+OGLCjCW/MrmCUSgdFozvbEZIXQPicP6x+5sNswjBuznOCknnfrAAQmffeS5oEtX75oa8lnkcX8wLe+0YXBM2w8=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, A1 = _t, A2 = _t, A3 = _t, A4 = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", Int64.Type}, {"A1", type text}, {"A2", type text}, {"A3", type text}, {"A4", type text}}),
#"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type", {"ID"}, "Attribute", "Value"),
#"Filtered Rows" = Table.SelectRows(#"Unpivoted Other Columns", each [Value] <> null and [Value] <> ""),
#"Removed Duplicates" = Table.Distinct(#"Filtered Rows", {"Value", "ID"}),
#"Grouped Rows" = Table.Group(#"Removed Duplicates", {"ID"}, {{"Data", each Table.AddIndexColumn(_, "AttributeRenumber", 1, 1), type table [ID=nullable number, Attribute=text, Value=text, AttributeRenumber=number]}}),
#"Removed Columns" = Table.RemoveColumns(#"Grouped Rows",{"ID"}),
#"Expanded Data" = Table.ExpandTableColumn(#"Removed Columns", "Data", {"ID", "Attribute", "Value", "AttributeRenumber"}, {"ID", "Attribute", "Value", "AttributeRenumber"}),
#"Add Prefix" = Table.TransformColumns(#"Expanded Data",{{"AttributeRenumber", each "A" & Number.ToText(_) , type text}}),
#"Removed Columns1" = Table.RemoveColumns(#"Add Prefix",{"Attribute"}),
#"Pivoted Column" = Table.Pivot(#"Removed Columns1", List.Distinct(#"Removed Columns1"[AttributeRenumber]), "AttributeRenumber", "Value"),
#"Replaced Value" = Table.ReplaceValue(#"Pivoted Column",null,"",Replacer.ReplaceValue,{"A1", "A2", "A3"})
in
#"Replaced Value"
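The unpivot / remove-duplicates / re-index / pivot chain boils down to: per ID, keep the first occurrence of each value, drop blanks, and pack the survivors to the left. A small Python sketch of that effect (mock values):

```python
# Hypothetical rows: (ID, [A1, A2, A3, A4])
rows = [
    (1, ["x", "x", "y", ""]),
    (2, ["p", "", "p", "q"]),
]

shifted = []
for id_, cols in rows:
    seen, kept = set(), []
    for v in cols:                  # "unpivot": walk A1..A4 in order
        if v and v not in seen:     # drop blanks and per-ID duplicates
            seen.add(v)
            kept.append(v)
    kept += [""] * (len(cols) - len(kept))  # "pivot" back, gaps closed leftward
    shifted.append((id_, kept))
```

The subgroup index in the M query plays the role of `kept`'s position here, which is what closes the gaps on re-pivot.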
Hi, I'm trying to create a new table based on an existing table in Power BI. How can I do this?
existing table
New table
The new table's group column can be exported from table3.
Power Query solution
let
Source = Table.FromRows(
Json.Document(
Binary.Decompress(
Binary.FromText(
"i45WSlTSUUpMBhLJeToK+aW2KYlKsTrRSklAkZRMhHByKVg4GYvqWAA=",
BinaryEncoding.Base64
),
Compression.Deflate
)
),
let
_t = ((type nullable text) meta [Serialized.Text = true])
in
type table [user = _t, status = _t, dn = _t]
),
#"Changed Type" = Table.TransformColumnTypes(
Source,
{{"user", type text}, {"status", type text}, {"dn", type text}}
),
#"Replaced Value" = Table.ReplaceValue(
#"Changed Type",
"cn, ou=",
"",
Replacer.ReplaceText,
{"dn"}
),
#"Removed Columns" = Table.RemoveColumns(#"Replaced Value", {"user"}),
#"Removed Other Columns" = Table.SelectColumns(#"Removed Columns", {"dn"}),
#"Removed Duplicates" = Table.Distinct(#"Removed Other Columns"),
#"Added Index" = Table.AddIndexColumn(#"Removed Duplicates", "Index", 1, 1, Int64.Type),
#"Merged Queries" = Table.NestedJoin(
#"Removed Columns",
{"dn"},
#"Added Index",
{"dn"},
"Added Index",
JoinKind.LeftOuter
),
#"Expanded Added Index" = Table.ExpandTableColumn(
#"Merged Queries",
"Added Index",
{"Index"},
{"Index"}
),
#"Pivoted Column" = Table.Pivot(
#"Expanded Added Index",
List.Distinct(#"Expanded Added Index"[status]),
"status",
"Index",
List.Count
),
#"Added Custom" = Table.AddColumn(#"Pivoted Column", "Total", each [ac] + [di])
in
#"Added Custom"
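The query's core moves are stripping the "cn, ou=" prefix, numbering each distinct dn, and pivoting status counts per group. Sketched in Python with invented rows and the ac/di statuses from the example:

```python
# Hypothetical rows: (user, status, dn)
rows = [
    ("u1", "ac", "cn, ou=sales"),
    ("u2", "di", "cn, ou=sales"),
    ("u3", "ac", "cn, ou=ops"),
]

# Strip the prefix and number each distinct dn, like the merged index column
groups = {}
counts = {}
for _, status, dn in rows:
    key = dn.replace("cn, ou=", "")
    idx = groups.setdefault(key, len(groups) + 1)
    row = counts.setdefault(idx, {"ac": 0, "di": 0})
    row[status] += 1          # the pivot with List.Count does this tally

# Total per group, like the final added column [ac] + [di]
totals = {idx: row["ac"] + row["di"] for idx, row in counts.items()}
```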
Hi, I am trying to create a cumulative sum for the "t_count" field in Power BI, but every time I run this formula: List.Sum(List.Range(#"Added Index"[t_count],0,[Hire_Count])) I get an error that the t_count field was not found in the table. Can anyone help me create the cumulative sum without getting this error? I am sharing the full code from the Advanced Editor.
let
Source = Excel.Workbook(File.Contents("C:\Users\rabi.jaiswal\Desktop\hr_analytics\EMPLOYEE_ATTRITION_DATA.xlsx"), null, true),
Sheet1_Sheet = Source{[Item="Sheet1",Kind="Sheet"]}[Data],
#"Promoted Headers" = Table.PromoteHeaders(Sheet1_Sheet, [PromoteAllScalars=true]),
#"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",{{"Emp_Id", Int64.Type}, {"Emp_Name", type text}, {"email", type text}, {"Join_Date", type date}, {"Joined_As", type text}, {"Department", type text}, {"Manger_Name", type text}, {"Current_Designation", type text}, {"Current_Designation_Start_Date", type date}, {"Work Hours", Int64.Type}, {"Standard_Work_Hours", Int64.Type}, {"Term_Date", type date}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type",{"Emp_Id", "Emp_Name", "email", "Joined_As", "Department", "Manger_Name", "Current_Designation", "Current_Designation_Start_Date", "Work Hours", "Standard_Work_Hours"}),
#"Sorted Rows" = Table.Sort(#"Removed Columns",{{"Join_Date", Order.Ascending}}),
#"Added Index" = Table.AddIndexColumn(#"Sorted Rows", "Index", 1, 1, Int64.Type),
#"Renamed Columns" = Table.RenameColumns(#"Added Index",{{"Index", "Hire_Count"}}),
#"Added Conditional Column" = Table.AddColumn(#"Renamed Columns", "t_count", each if [Term_Date] = null then 0 else 1),
#"Added Custom" = Table.AddColumn(#"Added Conditional Column", "Term_total", each List.Sum(List.Range(#"Added Index"[t_count],0,[Hire_Count])))
in
#"Added Custom"
The error occurs because the #"Added Index" step does not yet contain the t_count column; reference the #"Added Conditional Column" step instead. Your index starts at one (and was renamed to Hire_Count), so try
#"Added Custom" = Table.AddColumn(#"Added Conditional Column", "Term_total", each List.Sum(List.FirstN(#"Added Conditional Column"[t_count],[Hire_Count]))),
If your index started from zero, it would be
#"Added Custom" = Table.AddColumn(#"Added Conditional Column", "Term_total", each List.Sum(List.FirstN(#"Added Conditional Column"[t_count],[Hire_Count]+1))),
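Since Hire_Count is a 1-based row index, Term_total is just a running sum of the t_count flags. A Python illustration with made-up flags:

```python
t_count = [0, 1, 0, 1, 1]  # hypothetical 0/1 termination flags, in Join_Date order

# Term_total for row i is the sum of the first i flags (Hire_Count = i, 1-based),
# which is simply a running total
term_total = []
running = 0
for flag in t_count:
    running += flag
    term_total.append(running)
```

This is exactly what `List.Sum(List.FirstN(...[t_count], [Hire_Count]))` computes row by row, though the M version re-scans the list each time.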