I have made a table as follows (https://apexinsights.net/blog/convert-date-range-to-list):
In this scenario, suppose I configure incremental refresh on the Start Date column. Will Power BI support this correctly? I ask because, if the refresh is for the last 2 days or the last 2 months, it will fetch the source rows and apply the transform to the partition. My concern is that I will have to put the date parameter filter on Start Date before the non-folding steps so that the query folds (or, alternatively, that Power Query will auto-apply the date filter so that the query can fold).
So when it pulls the data based on Start Date and applies the transforms, I cannot picture what kind of partitions it will create: partitions on the Start Date, or on the expanded date? Is query folding supported in this scenario?
This is quite a complicated scenario, and one where I would probably just avoid adding incremental refresh.
You would have to use the RangeStart/RangeEnd parameters twice in this query: once in a filter that gets folded to the data source, to retrieve the ranges that overlap the [RangeStart, RangeEnd) interval, and a second time after expanding the ranges, to filter out individual rows that fall outside [RangeStart, RangeEnd).
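Roughly, the query could look like the M sketch below. The server, database, table, and column names are placeholders, and it assumes StartDate and EndDate are datetime columns; it is only meant to show where the two RangeStart/RangeEnd filters would sit.

let
    // RangeStart and RangeEnd are the incremental refresh parameters
    Source = Sql.Database("myserver", "mydb"),
    Ranges = Source{[Schema = "dbo", Item = "DateRanges"]}[Data],

    // First use: folds to the source - keep only ranges that overlap [RangeStart, RangeEnd)
    Overlapping = Table.SelectRows(
        Ranges,
        each [EndDate] >= RangeStart and [StartDate] < RangeEnd
    ),

    // Non-folding part: expand each range into one row per date
    Expanded = Table.AddColumn(
        Overlapping,
        "Date",
        each List.Dates(
            Date.From([StartDate]),
            Duration.Days(Date.From([EndDate]) - Date.From([StartDate])) + 1,
            #duration(1, 0, 0, 0)
        )
    ),
    ExpandedRows = Table.ExpandListColumn(Expanded, "Date"),

    // Second use: drop expanded rows that fall outside [RangeStart, RangeEnd)
    Final = Table.SelectRows(
        ExpandedRows,
        each [Date] >= Date.From(RangeStart) and [Date] < Date.From(RangeEnd)
    )
in
    Final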
Related
When a query can be folded (that is, when the last step of the Power Query transformations shows the native SQL query), Power BI will only pull the specific columns from the SQL table.
If a query cannot be folded, and say one of the earlier transformations is to remove all columns other than orderid, date, and saleamt, does Power BI then fetch the entire table (all columns)?
PQ streams data. If folding cannot occur, then each row is streamed as required. You can learn everything you want to know from Ben Gribaudo: https://bengribaudo.com/blog/2020/08/26/5417/how-power-query-thinks
PQ may also reorder your steps to optimise folding.
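As a small illustration of why step order matters, here is a rough M sketch (table and column names are placeholders) where the column selection is an early step, so it is the part most likely to fold, while a later step typically breaks folding:

let
    Source = Sql.Database("myserver", "mydb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    // Early step: keep only the columns that are actually needed
    Selected = Table.SelectColumns(Sales, {"orderid", "date", "saleamt"}),
    // Later step that usually does not fold, so rows are streamed from here on
    WithIndex = Table.AddIndexColumn(Selected, "RowIndex", 1, 1)
in
    WithIndex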
This is an example of what I think I need to do.
I would like to ask for some modeling advice on a problem I cannot solve myself:
I am using Power BI to visualize the time machinery is out of order.
The source is a register of equipment not functioning, with a start date and end date (note that there is no end date if the machine is not fixed yet).
I would like to show the time (hours, percentage, etc.) that the machinery is out of order, filtered for a specific period/date (e.g. a month).
So I have 2 date columns: "Start out of order" and "Back in order".
I do have a date table, which I would usually connect to all the date variables. However, since I am working with a start and an end date, this does not give the result I am looking for.
Any help is very much appreciated!
Kind regards,
Link to my Power BI FILE:
https://wetransfer.com/downloads/83ca3850392967d0d42a5cc71f4352c420200213160932/eb7353
Stijn
I am not sure how you would like to visualise your data, but this is what I managed to do:
Create a 'Daysbetween' column with:
Daysbetween =
IF (
    ISBLANK ( TF_Eventos[End out of order] ),
    DATEDIFF ( TF_Eventos[Start out of order], TODAY (), HOUR ),
    DATEDIFF ( TF_Eventos[Start out of order], TF_Eventos[End out of order], HOUR )
)
This creates the column that computes the difference between the two dates.
Then create a separate column with your date. In this case I copied the 'Start out of order' date, since I thought you might want to be able to filter on the start dates. Then simply create a relationship between your newly created date column and your 'Start out of order' date.
Doing so lets you create a visual with the Daysbetween value (in this case portrayed in hours) against your start dates. Now simply add a slicer and you can filter on date.
Hope this helps
I am currently working on an ETL pipeline that uses BigQuery to store staging data, and then uses Dataprep to transform the data and store it in new BigQuery tables for production.
We have been experiencing issues finding the most cost effective way to apply these transforms on a small selection of the data, typically only the last X number of days from the current max date in the staging data table. For example, we need to calculate the max available date in the staging data, and then retrieve all rows within the past 3 days from this date. Unfortunately we can't rely on the 'max date' in the staging data always being up to date (this data is brought in from third party APIs of varying quality and reliability).
At first I tried applying these transforms directly in Dataprep by getting the max date, creating a comparison column using DATEDIFF and then discarding rows more than 3 days older than this 'max date'. This proved to be very time consuming and inefficient in terms of cost.
The next thing we tried was to filter down the data in BigQuery views, which would then be used as the initial datasets for the Dataprep flows (the data would be pre-filtered before Dataprep applies any transforms). We first tried doing this dynamically in BigQuery, like so:
WITH latest_partitiontime AS (
  SELECT _PARTITIONTIME AS pt
  FROM `{project}.{dataset}.{table}`
  GROUP BY _PARTITIONTIME
  ORDER BY _PARTITIONTIME DESC
  LIMIT 1
)
SELECT {columns}
FROM `{project}.{dataset}.{table}`
WHERE _PARTITIONTIME >= (SELECT pt FROM latest_partitiontime)
But upon previewing the GB processed / estimated cost of the query, it seems very inefficient and expensive.
The next thing we tried was hard coding the date, which for some reason is a lot cheaper/quicker:
SELECT {columns}
FROM `{project}.{dataset}.{table}`
WHERE _PARTITIONTIME >= '2018-08-08'
So our current plan is to maintain a view for each table, and update the hard-coded date in the view SQL via the Python SDK each time the staging data load successfully completes (https://cloud.google.com/bigquery/docs/managing-views).
It feels like we are potentially missing a much easier/more efficient solution to this problem. So I wanted to ask:
Is it more cost effective to carry out this initial filtering by date in Dataprep or in BigQuery?
What is the most cost effective way of filtering the data in the chosen product?
Are you familiar with the MERGE statement of standard SQL and the recently released clustering feature? It can actually merge your data, and you can further customize it to read only some partitions.
Example from the manual:
MERGE dataset.DetailedInventory T
USING dataset.Inventory S
ON T.product = S.product
WHEN NOT MATCHED AND quantity < 20 THEN
INSERT(product, quantity, supply_constrained, comments)
VALUES(product, quantity, true, ARRAY<STRUCT<created DATE, comment STRING>>[(DATE('2016-01-01'), 'comment1')])
WHEN NOT MATCHED THEN
INSERT(product, quantity, supply_constrained)
VALUES(product, quantity, false)
Hint: you can partition by NULL and leverage only the clustering level.
I have 40 million rows in my dataset. Each day I may get an extra 100 rows. Obviously I don't want to have to import the whole 40 million each time I do a data refresh. Is it possible to do an incremental refresh where only the new rows are added?
I don't think incremental update as you describe it is possible yet.
It looks like you can push rows with Power BI REST API, if you're happy to switch to that.
However, you might find this workaround useful:
Split your table and query into two: one where date <= 'somedate' and one where date > 'somedate'.
Add an "empty query" and use Table.Combine to join your two subtables (see the M sketch after these steps). Use this as your main table.
Whenever you need to refresh, only refresh the second query (the one with date > 'somedate').
Every once in a while, when that second query starts taking a long time, change 'somedate' to the current date and do a full refresh.
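A rough M sketch of that workaround, using placeholder query, table, and column names and an arbitrary cut-off date:

// Query "SalesHistoric" - refreshed only occasionally
let
    Source = Sql.Database("myserver", "mydb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    Historic = Table.SelectRows(Sales, each [date] <= #date(2020, 1, 1))
in
    Historic

// Query "SalesRecent" - the only one you refresh day to day
let
    Source = Sql.Database("myserver", "mydb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    Recent = Table.SelectRows(Sales, each [date] > #date(2020, 1, 1))
in
    Recent

// Blank query "Sales" that combines the two and serves as the main table
let
    Combined = Table.Combine({SalesHistoric, SalesRecent})
in
    Combined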
The feature has now been implemented and is called Incremental refresh. Currently it is a Premium-only feature.
I'm trying the new Power BI (Desktop) to create a bar chart that shows me the duration in days for the delivery of an order.
I have 2 files: one with the delivery data (date, barcode) and another file with the delivery statuses (date, barcode).
I created a relationship in the Power BI relationships tab on the left side to relate the tables on barcode: one Delivery to many DeliveryStatusses.
Now I want to add a column/measure to calculate the number of days before a package is delivered. I searched a few blogs but with no success.
The function DATEDIFF is only recognized in a measure, and measures seem to work on table data, not row data. So adding a column using the DATEDIFF function doesn't work.
Adding a column using a formula:
Duration = [DeliveryDate] - Delivery[OrderDate]
results in an error saying that the right side is a list (it seems the relationship isn't in place)?
What am I doing wrong?
You might try doing this in the Query window instead, since I think each barcode has just one delivery date and one delivery status. You could merge the two queries into a single table; then you wouldn't need to worry about the relationships. If, on the other hand, you can have multiple lines for each delivery in the delivery status table, then you need to get more fancy. If you're only interested in the last status (as opposed to the history of statuses), you could again use the Query window to group the data. If you need the full flexibility, you'd probably need to create a measure that expresses the logic you want.
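For example, a merge in the Query Editor could look roughly like the following M, assuming two queries named Delivery and DeliveryStatus that both have a barcode column (the names here are placeholders, not taken from your file):

let
    // Left join each delivery to its status rows on barcode
    Merged = Table.NestedJoin(Delivery, {"barcode"}, DeliveryStatus, {"barcode"}, "Status", JoinKind.LeftOuter),
    // Pull the status date into the delivery table under a new name
    WithStatusDate = Table.ExpandTableColumn(Merged, "Status", {"date"}, {"StatusDate"}),
    // Day difference between the status date and the order date
    WithDuration = Table.AddColumn(WithStatusDate, "DurationDays", each Duration.Days([StatusDate] - [date]))
in
    WithDuration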
The RELATED keyword is used to reference another table. Update your query as follows and it should work.
Like this:
Duration = [DeliveryDate] - RELATED(Delivery[OrderDate])