I have created a data flow.
Then created a dataset that uses the dataflow as data source.
Then I tried editing the dataset to point to a different dataflow; however, there isn't an option to change to a different dataflow.
How to modify a dataset's dataflow in Power BI?
The M code generated when loading data from a Power BI Dataflow looks like this:
let
    Source = PowerBI.Dataflows(null),
    #"Long-Workspace-Id" = Source{[workspaceId="Long-Workspace-Id"]}[Data],
    #"Long-Dataflow-Id" = #"Long-Workspace-Id"{[dataflowId="Long-Dataflow-Id"]}[Data],
    #"Sample Data1" = #"Long-Dataflow-Id"{[entity="Sample Data"]}[Data]
in
    #"Sample Data1"
With ID values that look like this fake ID string: 68920386-482d-445e-9725-839065646dde.
So, you just need to figure out the ID values for the dataflow you do want to connect to and swap them out.
One way to find these IDs would be to connect to the other dataflow in another Power BI instance and copy the values from the query editor in that one over to the original Power BI instance.
The IDs are also contained in the URL when you navigate to the dataflow within the workspace that it lives in. The URL looks like
https://app.powerbi.com/groups/long-workspace-id/dataflows/long-dataflow-id
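If you'd rather not eyeball the URL, the two IDs can be pulled out programmatically. A quick sketch (the `parse_dataflow_url` helper is my own illustration, not part of any Power BI tooling):

```python
import re

def parse_dataflow_url(url):
    """Extract the workspace and dataflow IDs from a dataflow URL
    so they can be pasted into the M query above."""
    m = re.search(r"/groups/([^/]+)/dataflows/([^/?#]+)", url)
    if not m:
        raise ValueError("not a dataflow URL")
    return {"workspaceId": m.group(1), "dataflowId": m.group(2)}

print(parse_dataflow_url(
    "https://app.powerbi.com/groups/long-workspace-id/dataflows/long-dataflow-id"))
# -> {'workspaceId': 'long-workspace-id', 'dataflowId': 'long-dataflow-id'}
```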
I have a PBI dataset which I created using Power Query on my local laptop.
The PBI dataset models the gross data into fact and dim tables.
Let's say the source data is an invoice table; Power Query breaks the table into fact and dim tables, creates the connections, etc. Then I create a PBI report to visualize this data.
I am looking for a way to create the data model automatically.
As I picture it, it will be a JSON file which includes the data source parameters (details, credentials and other parameters) and a script which includes the data model logic as I created it in Power Query.
When I get a new invoice table from another data source all I need to do is:
update the JSON file and run the script, which will set up the data model automatically.
For example: take the invoice table, create the fact and dim tables, and create the connections.
No problem using other Azure solutions such as Azure Analysis Services etc.
I have the coding skills, so feel free to suggest APIs and everything.
Thank you, Tal
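One hypothetical starting point for the JSON-driven setup described above: the Power BI REST API's push-dataset endpoint accepts a JSON model definition, so a config file plus a small script can create tables programmatically. A Python sketch (the config shape, table names, and `build_dataset_payload` helper are my own illustration, not an established schema; push datasets also have feature limits, so the XMLA endpoint with the Tabular Object Model is the heavier-weight route for full models):

```python
import json

# Hypothetical config-file shape: dataset name plus the model definition.
CONFIG = {
    "datasetName": "Invoices",
    "tables": [
        {"name": "FactInvoice",
         "columns": [{"name": "InvoiceId", "dataType": "Int64"},
                     {"name": "CustomerId", "dataType": "Int64"},
                     {"name": "Amount", "dataType": "Double"}]},
        {"name": "DimCustomer",
         "columns": [{"name": "CustomerId", "dataType": "Int64"},
                     {"name": "CustomerName", "dataType": "String"}]},
    ],
}

def build_dataset_payload(config):
    """Shape the config into a body for POST /v1.0/myorg/datasets."""
    return {"name": config["datasetName"], "tables": config["tables"]}

payload = build_dataset_payload(CONFIG)
# With a valid AAD token this would be sent as:
#   requests.post("https://api.powerbi.com/v1.0/myorg/datasets",
#                 headers={"Authorization": f"Bearer {token}"}, json=payload)
print(json.dumps(payload, indent=2))
```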
I am a newbie to Power BI, and I am trying to learn how to create a workflow. As a first step, I created an ADLS Gen2 container and uploaded a sample JSON file to the container. The JSON has only one key/value pair, something like
{ "DOMINO_API_KEY": "val1"}
I am able to establish the connection to my container through Power BI and transform the JSON to a table like in the below screenshot.
Then I clicked next to create the Map table. It prompts me to enter the table name etc.
When I created the table, I do not see the value in the table which I transformed earlier. What am I doing wrong here? Can anyone help?
Thanks,
Tintu
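For reference, the shape the transform step should produce can be sketched outside Power BI. A plain-Python illustration of what Power Query's `Record.ToTable` does with that single record (the row structure is what you would expect to see in the loaded table):

```python
import json

raw = '{ "DOMINO_API_KEY": "val1"}'

# Pivot the single JSON record into (Name, Value) rows, the shape
# Record.ToTable produces before the table is loaded.
record = json.loads(raw)
rows = [{"Name": k, "Value": v} for k, v in record.items()]
print(rows)  # [{'Name': 'DOMINO_API_KEY', 'Value': 'val1'}]
```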
I have a power BI dataset that takes its data from a software made by the IT team in my organization.
I was wondering if it is possible for me to "freeze" all the data in the PBI dataset (taking a picture of the data, for example, today) and use this dataset for further analysis (I have another Power BI file linked to that Power BI dataset). I know the data won't refresh, but that's not important for what I need to do, as I only need the past info.
The reason I need to know if that's possible is that I'm going overseas for one month and won't have access to the original dataset. Downloading all the data into one Excel file is impossible as it is way too big.
thanks
It sounds like you're after some sort of snapshotting functionality.
If you just want to keep the file as-is, you can download the .pbix and simply not refresh it, provided it's in import mode.
However, one approach you could take if you want to continue doing development without worrying about accidentally refreshing is to use a Power BI dataflow.
You could copy your Power Query queries to a dataflow, refresh them all as at today, then not refresh the dataflow anymore.
You can then point your Power BI dataset to your dataflow:
https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-create
That way, if you want to do further transformation of the data, you won't be getting new data from the data source (so long as you don't refresh the dataflow).
Currently we have a problem with loading data when updating the report data with respect to the DB, since it has too many records and it takes forever to load all the data. The issue is how I can load only the data from the last year to avoid taking so long to load everything. As I see it, connecting to Cosmos DB in the dialog box allows me to place a SQL query, but I don't know how to write one for this type of non-relational database.
Example
Power BI has an incremental refresh feature. You should be able to refresh the current year only.
If that still doesn't meet expectations, I would look at a preview feature called Azure Synapse Link, which automatically pulls all Cosmos DB updates out into analytical storage you can query much faster in Azure Synapse Analytics, in order to refresh Power BI faster.
Depending on the volume of the data, you will hit a number of issues. The first is that you may exceed your RU limit, slowing down the extraction of the data from Cosmos DB. The second issue will be transforming the data from JSON format to a structured format.
I would try to write a query that specifies the fields and items that you need. That will reduce the time of processing and getting the data.
A SQL query would be something like
SELECT * FROM c WHERE c.partitionEntity = 'guid'
For more information on the Cosmos DB SQL API syntax, please see here to get you started.
You can use the query window in Azure to run the SQL commands, or Azure Storage Explorer to test the query, then move it to Power BI.
What is highly recommended is to extract the data into a place where it can be transformed into a structured format like a table or CSV file.
For example, use Azure Databricks to extract the data, then turn the JSON into a table-formatted object.
You have the option of running Databricks notebook queries against Cosmos DB, or Azure Databricks in its own instance. One other option would be to use the change feed and an Azure Function to send and shred the data to Blob Storage and query it from there, using Power BI, Databricks, Azure SQL Database, etc.
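The "shred JSON into a table-shaped format" step can be sketched in plain Python (an illustration with made-up documents and field names, not the actual Databricks or Function code):

```python
import csv
import io
import json

# Made-up Cosmos-style documents for illustration.
docs = [
    '{"id": "1", "partitionEntity": "guid", "amount": 10.5, "_ts": 1609455600}',
    '{"id": "2", "partitionEntity": "guid", "amount": 7.0, "_ts": 1612134000}',
]

# Shred the JSON documents into a flat, CSV-ready table in memory.
rows = [json.loads(d) for d in docs]
fields = ["id", "partitionEntity", "amount", "_ts"]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```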
In the Source step of your query, you can filter based on the Cosmos DB _ts system property, like:
Query ="SELECT * FROM XYZ AS t WHERE t._ts > 1609455599"
In this case, 1609455599 is the timestamp which corresponds to 31.12.2020, 23:59:59. So, only data from 2021 will be selected.
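If you need the cutoff for a different date, the timestamp can be computed directly. A small sketch (the `ts_cutoff` helper is my own; note 1609455599 is 31.12.2020 23:59:59 at UTC+1, while the same wall-clock time in UTC is 1609459199):

```python
from datetime import datetime, timedelta, timezone

def ts_cutoff(year, month, day, hour=23, minute=59, second=59,
              utc_offset_hours=0):
    """Unix timestamp (seconds) for the given wall-clock time
    at the given UTC offset, for use against the _ts property."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return int(datetime(year, month, day, hour, minute, second,
                        tzinfo=tz).timestamp())

# 31.12.2020 23:59:59 in UTC+1 (e.g. CET) matches the query above.
print(ts_cutoff(2020, 12, 31, utc_offset_hours=1))  # -> 1609455599
```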
I am new to Power BI and Stream Analytics jobs.
I am trying to get the latest/most recent value in Power BI based on timestamp. I pushed the data from a Stream Analytics job to a Power BI dataset. I am receiving all the data successfully.
Now, for one particular instance, I want to show the latest value from the Stream Analytics job. I used a simple matrix/table to display the data, and it displays the data correctly. However, when I try to use Top N with N=1, it returns null. The same thing used to work correctly when I pulled the data from Azure Tables.
I just want to display the most recent data (a single value) from Stream Analytics in Power BI. What can I do to resolve this issue?
Regards,
AJ