How to create table based on minimum date from other table in DAX? - powerbi

I want to create a second table from the first table using filters with dates and other variables as follows. How can I create this?
Following is the expected table and original table,

Go to Edit Queries. Let's say our base table is named RawData. Add a blank query and use this expression to copy your RawData table:
=RawData
The new table will be RawDataGrouped. Now select the new table and go to Home > Group By and use the following settings:
The result will be the following table. Note that I didn't use exactly the values you used, to keep this sample to a minimum effort:
You can now also create a relationship between these two tables (by the Index column) to use cross filtering between them.
You could show the grouped data and use the relationship to display the RawData in a subreport (or custom tooltip), for example.
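For reference, the Group By step above could look roughly like this in M (a sketch only; the Index and Date column names are assumptions, since the screenshots are not reproduced here):

```m
let
    Source = RawData,
    // one row per Index, keeping the earliest Date in each group
    RawDataGrouped = Table.Group(Source, {"Index"}, {{"MinDate", each List.Min([Date]), type date}})
in
    RawDataGrouped
```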

I assume you are looking for a calculated table. Below is a workaround for that:
In the Query Editor you can create a duplicate of the existing (original) table, then click the filter icon in the corner of the Date column of the duplicate and select Date Filters -> Is Earliest. The table should now contain only the rows having the minimum date in that column.
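For reference, the step that the Is Earliest filter generates looks like this in M (Source stands for the previous step; Date is assumed to be the name of your date column):

```m
= Table.SelectRows(Source, let earliest = List.Min(Source[Date]) in each [Date] = earliest)
```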
Note: This table is dynamic and will give updated results as data changes in the original table, but you have to refresh both tables.
Original Table:
Desired Table:
When I added a new column to it, after refreshing the dataset I got the result below (this implies it recalculates based on each data change in the original source):
New data entry:
Output:

Related

Reference variable tables using dynamic table names - Power BI, DAX

I have a measure whereby I calculate some data based on a slicer value. The slicer determines which table I get my data from e.g. SUM(TableA[sales]) or SUM(TableB[sales]). (Each of my tables is of the same structure).
I'd like to be able to set up a variable to dynamically change (with my slicer) which table I'm referring to.
Right now I have:
// Set __table to a stored string representation of the name of the database
// This seems to work
VAR __table = SELECTEDVALUE( AllTables[TableNames] )
// Evaluate the sales column of my variable table
// This doesn't work due to 'Cannot find name sales'
CALCULATE(SUM(__table[sales]))
I guess what I'm after is a way to have the slicer change the referenced table in my measures (with all of the columns being the same for each table).
I'm new to Power BI and I've been struggling with this for several days now. I'd really appreciate any help with either the method I'm attempting or any alternatives.
If you have a small amount of data in each table, you can merge them all together, as they share the same structure. Just add an additional column holding the name of the source table. Now you can create your slicer using the new column, and it will hold all table names in the list. If you then change the table name in the slicer, your visuals will show results based on data from the respective table.
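If keeping the tables separate is required, a common alternative is to enumerate the tables in a SWITCH, since DAX cannot resolve a table reference from a string. A sketch, assuming the names stored in AllTables[TableNames] are exactly "TableA" and "TableB":

```dax
Selected Sales :=
SWITCH (
    SELECTEDVALUE ( AllTables[TableNames] ),
    "TableA", SUM ( TableA[sales] ),
    "TableB", SUM ( TableB[sales] )
)
```

The downside is that every table has to be listed explicitly, but it keeps the measure fully slicer-driven.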

Big Query - Convert a int column into float

I would like to convert a column called lauder from int to float in Big Query. My table is called historical. I have been able to use this SQL query
SELECT *, CAST(lauder as float64) as temp
FROM sandbox.dailydev.historical
The query works but the changes are not saved into the table. What should I do?
If you use SELECT * you will scan the whole table, and the cost will reflect that. If the table is small this shouldn't be a problem, but if it is big enough to be a cost concern, below is another approach:
apply ALTER TABLE ... ADD COLUMN to add a new column of the needed data type
apply UPDATE to populate the new column:
UPDATE your_table
SET new_column = CAST(old_column AS FLOAT64)
WHERE TRUE
Do you want to save them in a temporary table to use it later?
You can save it to a temporary table like below and then refer to temp:
WITH temp AS (
  SELECT *, CAST(lauder AS FLOAT64) AS lauder_float  -- the alias lauder_float is illustrative
  FROM sandbox.dailydev.historical
)
SELECT * FROM temp
You cannot change a column's data type in a table:
https://cloud.google.com/bigquery/docs/manually-changing-schemas#changing_a_columns_data_type
What you can do is either:
Create a view to sit on top and handle the data type conversion
Create a new column and set the data type to float64 and insert values into it
Overwrite the table
Options 2 and 3 are outlined well, including pros and cons, in the link I shared above.
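Option 1 could look like this (a sketch; the view name historical_v is an assumption):

```sql
CREATE OR REPLACE VIEW sandbox.dailydev.historical_v AS
SELECT * EXCEPT (lauder), CAST(lauder AS FLOAT64) AS lauder
FROM sandbox.dailydev.historical;
```

Queries then read from the view and see lauder as FLOAT64, while the underlying table stays unchanged.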
Your statement is correct, but table columns in BigQuery are immutable. You need to run your query and save the results to a new table with the modified column.
Click "More" > "Query settings", and under "Destination" select "Set a destination table for query results" and fill in the table name. You can even select whether you want to overwrite the existing table with the generated one.
After these settings are set, just "Run" your query as usual.
You can use CREATE OR REPLACE TABLE to write the structural changes along with the data into the same table:
CREATE OR REPLACE TABLE sandbox.dailydev.historical
AS SELECT *, CAST(lauder as float64) as temp FROM sandbox.dailydev.historical;
In this example, the historical table is recreated with an additional column temp.
In some cases you can change column types:
CREATE TABLE mydataset.mytable(c1 INT64);
ALTER TABLE mydataset.mytable ALTER COLUMN c1 SET DATA TYPE NUMERIC;
Check the conversion rules in the Google docs.

Power BI / Power Query - M language - playing with data inside group table

Hello M language masters!
I have a question about working with grouped rows, when Power Query creates a table with data. But maybe it is better to start from the beginning.
Important information! I am asking, as an example, only about adding an index. I know that there are different ways to reach such a result, but for this question I need an answer about the possibility of working on tables. I want to use this answer in different actions (e.g. table sorting, adding columns in a group table).
In my sample data source, I have a list of fake transactions. I want to add an index for each Salesman, to count operations for each of them.
Sample Data
So I just added this file as a data source in Power BI. In Power Query, I grouped rows by name. This step created for me a column containing a table for each Salesman, which stores all of his or her operations.
Grouping result
And now, I want to add an index column in each table. I know that this is possible by adding a new column to the main table, which will store a new table with the added index:
Custom column function
And in each table, I have an index. That is good. But I now have an extra column (one with the table without the index, and one with the table with the index).
Result - a little bit messy
So I want to ask if there is any possibility to add such an index directly to the table in the Operations column, without creating the additional column. My approach seems a bit messy and I want to find something cleaner. Does anyone know a smart solution for that?
Thank you in advance.
Artur
Sure, you may do it inside Table.Group function:
= Table.Group(Source, {"Salesman"}, {"Operations", each Table.AddIndexColumn(_, "i", 1, 1)})
P.S. To add existing index column to nested table use this code:
= Table.ReplaceValue(PreviousStep,each [index],0,(a,b,c)=>Table.AddColumn(a,"index", each b),{"Operations"})

Comparing 2 Tables in PowerBI

Working on a way to compare 2 tables in PowerBI.
I'm joining the 2 tables using the primary key and creating custom columns that check whether the old and new values are equal.
This doesn't seem like the most efficient way of doing things, and I can't even color code the matrix because some values aren't integers.
Any suggestions?
I did a big project like this last year, comparing two versions of a data warehouse (SQL database).
I tackled most of it in the Query Editor (actually using Power Query for Excel, but that's the same as PBI's Query Editor).
My key technique was to first create a Query for each table, and use Unpivot Other Columns on everything apart from the Primary Key columns. This transforms it into rows of Attribute, Value. You can filter Attribute to just the columns you want to compare.
Then in a new Query you can Merge & Expand the "old" and "new" Queries, joining on the Primary Key columns + the Attribute column. Then add Filter or Add Column steps to get to your final output.
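Sketched in M (the query names OldTable/NewTable and the key column ID are assumptions):

```m
let
    // unpivot everything except the key into Attribute/Value pairs
    OldAttr = Table.UnpivotOtherColumns(OldTable, {"ID"}, "Attribute", "OldValue"),
    NewAttr = Table.UnpivotOtherColumns(NewTable, {"ID"}, "Attribute", "NewValue"),
    // merge on key + attribute, then flag mismatches
    Joined = Table.NestedJoin(OldAttr, {"ID", "Attribute"}, NewAttr, {"ID", "Attribute"}, "New"),
    Expanded = Table.ExpandTableColumn(Joined, "New", {"NewValue"}),
    Compared = Table.AddColumn(Expanded, "Match", each [OldValue] = [NewValue], type logical)
in
    Compared
```

Filtering Compared to Match = false then gives only the cells that differ, regardless of column data types.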

How to add external data as new columns in Power Query?

Today is my first day to use PowerBI 2.0 Desktop.
Is there any way to add new columns from external data into the existing table in my PowerBI?
Or is there anyway to add new columns from another table in PowerBI?
It seems that, in Power Query, the commands Add Custom Column, Add Index Column and Duplicate Column all work only with the existing columns of the same table...
You can use Merge Queries to join together two queries, which will let you bring in the other table's columns.
Also, Add Custom Column accepts an arbitrary expression, so you can reference other tables in that expression. For example, if Table1 and Table2 had the same number of rows, I could copy over Table2's column by doing the following:
Add an Index Column. Let's call it Index.
Add a Custom Column with the following expression: Table2[ColumnName]{[Index]}
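Put together, the two steps could look like this (a sketch; Table1, Table2, ColumnName and FromTable2 are placeholders, and both tables are assumed to have the same number of rows in the same order):

```m
let
    // 0-based index, so it can be used for positional lookup below
    WithIndex = Table.AddIndexColumn(Table1, "Index", 0, 1),
    // pull the value at the same row position out of Table2
    WithNewColumn = Table.AddColumn(WithIndex, "FromTable2", each Table2[ColumnName]{[Index]})
in
    WithNewColumn
```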