Dynamic Dates in PowerBI Time Tables - powerbi

In my Power BI report, I have a table with several tasks, their start and end dates, and their durations for a product launch project. I also have a product launch date, and all tasks must be finished before it.
My question is whether there is a way to show how the start and end dates of my tasks would change if the launch date changes. The order of the tasks and the duration of each task stay the same; I want users to decide when the product launch date should be.
I would appreciate it if someone could help me with a solution. Thank you.
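One way to approach this (a sketch only, assuming a disconnected date table named LaunchDate used as a slicer, and a Tasks table with a Duration column and a DaysBeforeLaunch column holding each task's fixed gap to the launch; all of these names are assumptions, not from the question) is to derive the dates with measures instead of storing them:

```dax
-- Sketch: compute task dates backward from the launch date the user picks.
-- 'LaunchDate', 'Tasks', [Duration] and [DaysBeforeLaunch] are assumed names.
New End Date =
SELECTEDVALUE ( LaunchDate[Date] )
    - SELECTEDVALUE ( Tasks[DaysBeforeLaunch] )

New Start Date =
[New End Date] - SELECTEDVALUE ( Tasks[Duration] ) + 1
```

Because the dates are measures, they recalculate automatically whenever the user picks a different launch date in the slicer, while durations and task order stay fixed.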

Related

Powerbi ability to create snapshot data from a historical table of data

I have a SQL database linked where I have the complete history of products and users. I want the user to be able to select a year on the slicer so that the data automatically shows active products, expired products, and products added in that year (a snapshot).
Is there a way this can be done? I have not been able to find a measure that does this for me.
I recommend creating a date dimension table first; I usually call mine Calendar. This useful post by RADACAD shows how to create one: https://radacad.com/power-bi-date-or-calendar-table-best-method-dax-or-power-query
Once it's done, create relationships between your fact tables and the calendar table on the key dates for when your products are active or expired (I'm making a big assumption that that's what your tables store).
Your calendar table will then act as a single time/date point of truth and should be used to slice and dice your fact table.
Hope this helps!
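For reference, a minimal calendar table can be built as a DAX calculated table (the date range below is an assumption; widen it to cover your data):

```dax
-- Minimal date dimension sketch; extend with whatever columns you slice by.
Calendar =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2015, 1, 1 ), DATE ( 2025, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "Month", FORMAT ( [Date], "MMM" )
)
```

Then relate Calendar[Date] to the date keys in your fact tables and use Calendar columns in your slicers.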

Automatic job to delete bigquery table records

Is there a way to schedule deletion of rows from bigquery table based on a column condition? Something like a job to schedule to run every day.
For example, let's say I have a column called creation_date in the table. I need to delete records when creation_date is less than the current date minus one week (creation_date < current date - 7). The job needs to run every day at a specified time and delete records based on that condition.
If there aren't any built in scheduler operations, could you suggest any options available?
You have a couple of simple options within BigQuery itself.
The simplest is likely scheduled queries, which execute a statement (for example a DELETE) on a schedule you define.
Additionally, you could set table or partition expirations. This involves a little more legwork but achieves a similar result; based on your description, a partition expiration is likely what you want to set up.
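Both options can be sketched in BigQuery SQL (the project and dataset names below are placeholders; the table and column names come from the question):

```sql
-- Option 1: run this daily via a BigQuery scheduled query.
DELETE FROM `my_project.my_dataset.my_table`
WHERE creation_date < DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY);

-- Option 2: let BigQuery drop old data automatically
-- (assumes the table is partitioned on creation_date).
ALTER TABLE `my_project.my_dataset.my_table`
SET OPTIONS (partition_expiration_days = 7);
```

With option 2 there is no job to maintain at all: partitions older than the expiration are removed by BigQuery itself.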

Identifying unique texts

I'm making a Utilization file for our team.
I'm having a bit of difficulty identifying what kind of workflow the agent did on a given day.
I need to identify first the workflow done by that agent for a specific day because each workflow has a different AHT (average handling time) for the computation of their capacity for that day.
I have this file where
Column A = agent's name
Column B = date
Column C = workflow
Is there a way to identify the workflows that an agent did on a given day?
Note: some agents work on different workflows each day.
Here's a sample of what I was trying to do.
Sample 2
try:
=IF((I2="")*(I3=""),,UNIQUE(IFERROR(FILTER(D2:D, B2:B=I2, C2:C=I3), "no data")))
spreadsheet demo

Timezone related issues in BigQuery (for partitioning and query)

We have a campaign management system. We create and run campaigns on various channels, and when a user clicks or accesses any of the ads (as part of a campaign), the system generates a log. Our system is hosted in GCP, and the logs are exported to BigQuery using the Exports feature.
In BigQuery, the log table is partitioned on the timestamp field (the time the log was generated). We understand that BigQuery stores dates in UTC, so the partitions are also based on UTC time.
Using this log table, we need to generate reports per day, for example the number of impressions per day per campaign, and we need to show these reports in Eastern Time (ET).
Because the BigQuery table is partitioned by UTC, a query for an ET day would potentially need to scan multiple partitions. Has anyone addressed this issue, or does anyone have suggestions to optimize the storage and queries so that they take full advantage of BigQuery's partitioning?
We are planning to use GCP Data Studio for the reports.
BigQuery should be smart enough to filter for the correct timezones when dealing with partitions.
For example:
SELECT MIN(datehour) time_start, MAX(datehour) time_end, ANY_VALUE(title) title
FROM `fh-bigquery.wikipedia_v3.pageviews_2018` a
WHERE DATE(datehour) = '2018-01-03'
5.0s elapsed, 4.56 GB processed
For this query we processed the 4.56GB in the 2018-01-03 partition. What if we want to adjust for a day in the US? Let's add this in the WHERE clause:
WHERE DATE(datehour, "America/Los_Angeles") = '2018-01-03'
4.4s elapsed, 9.04 GB processed
Now this query is automatically scanning 2 partitions, as it needs to go across days. For me this is good enough, as BigQuery is able to automatically figure this out.
But what if you wanted to permanently optimize for one timezone? You could create a generated, shifted DATE column, and partition on that instead.
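That shifted-column idea can be sketched like this (a sketch only: the project, dataset, and table names are placeholders, and America/New_York is assumed as the reporting timezone):

```sql
-- Materialize a date column pre-shifted to the reporting timezone
-- and partition the table on it, so one reporting day = one partition.
CREATE TABLE `my_project.my_dataset.logs_local`
PARTITION BY local_date AS
SELECT
  *,
  DATE(timestamp, 'America/New_York') AS local_date
FROM `my_project.my_dataset.logs`;
```

Queries that filter on local_date then prune to a single partition per reporting day, at the cost of storing one extra column and keeping it populated on ingest.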

Power BI report help finalizing

I am trying to make an equipment availability report from 3 tables linked together by a specific date, for example 03/03/2019.
That date returns the jobs that we did on some equipment. When I run the report, I get only the equipment that had jobs on that day: say I have 10 machines and I did only 3 jobs that day, then my result has only 3 rows, but I want my report to have 10 rows, with only 3 of them containing job information. (The image shows a sample.)
I need my report to show all the equipment, even those with no job on that day. In Access this is easy: edit the relationship properties and choose to keep all records from the first table.