Can we create pipelines to run DAX queries on Azure Analysis Services Tabular Models from ADF or Synapse Analytics?

How can we create an ADF (or Synapse Analytics) pipeline that runs a DAX query against an AAS Tabular Model and stores the results in Azure Data Warehouse tables or in a .csv file?
I've read about creating a .NET library for connecting to Analysis Services servers and querying data from .NET code. Is there any other approach?

You can create a linked server mapping to AAS on a SQL Server instance.
Then create a linked service in ADF to that SQL database and query AAS through it.
https://datasharkx.wordpress.com/2021/03/16/copy-data-from-ssas-aas-through-azure-data-factory
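As a rough illustration of that setup (assuming a SQL Server instance on which a linked server to AAS, here called AAS_LINK, has already been created via sp_addlinkedserver with the MSOLAP provider), the DAX query is wrapped in OPENQUERY and comes back as an ordinary rowset. ADF's copy activity can use the same OPENQUERY text as the source query on the SQL linked service; the Java sketch below just shows the round trip, with server, database, and credentials as placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AasViaLinkedServer {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string for the SQL Server hosting the linked
        // server (requires the mssql-jdbc driver on the classpath).
        String url = "jdbc:sqlserver://myserver.example.com:1433;"
                   + "databaseName=mydb;user=myuser;password=mypassword";
        // SQL Server forwards the quoted DAX to AAS through the linked server
        // and returns the evaluated table as a normal result set.
        String sql = "SELECT * FROM OPENQUERY(AAS_LINK, 'EVALUATE Customer')";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // first column of each row
            }
        }
    }
}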

Related

Create Power BI Datamart from Azure Analysis Services

I am trying to create a Power BI Datamart from Azure Analysis Services. There is a data model available in Azure Analysis Services, and I can connect using the URL and database name. The data model has ~100 tables in it, with relationships already set up. So my question is: if I want to create a Power BI Datamart from the Azure Analysis Services data model, do I need to use the Get Data option of the Power BI Datamart, connect to Azure Analysis Services, and select a table and its fields 100 times to get all the tables of the data model into my Power BI Datamart? Is there an import function available where I can import all the tables in one go?
Why do you want to copy data from AAS into a database?
The reason you find it difficult is that it's an odd thing to do. The query designer for AAS/SSAS generates MDX queries, which are intended to run aggregate queries that return a handful of rows and are wholly unsuitable for extracting whole tables. If you try, the queries will just run forever and fail.
It's possible to extract data from AAS/SSAS tabular models, but you must use DAX, not MDX, so you need to use the Power Query or "Transform Data" window and its Advanced Editor.
Each query to load a table should look like this, e.g. to load the 'Customer' table:
let
    // DAX table expression that returns the whole 'Customer' table
    Dax = "evaluate Customer",
    // Run the DAX query against the AAS database (server and database names are examples)
    Source = AnalysisServices.Database("asazure://southcentralus.asazure.windows.net/myserver", "mydatabase", [Query=Dax])
in
    Source

Scalable Power BI data model on delta lake ADLS Gen2

Our current architecture for reporting and dashboarding is similar to the following:
[Sql Azure] <-> [Azure Analysis Services (AAS)] <-> [Power BI]
We have almost 30 Power BI Pro Licenses (no Premium Tier)
As we migrate our on-premises data feeds to ADLS Gen2 with Data Factory and Databricks (in the long run, we will decommission the SQL Azure DBs), we are investigating how to connect Power BI to the delta tables.
Several approaches suggest using Databricks SQL endpoints for this purpose:
https://www.youtube.com/watch?v=lkI4QZ6FKbI&t=604s
IMHO this is fine as long as you have a few reports. What if you have, say, 20-30? Is there a middle layer between ADLS Gen2 delta tables and Power BI for a scalable and efficient tabular model? How can we define measures and calculated tables and manage relationships efficiently, without the hassle of doing this from scratch in every single .pbix?
[ADLS Gen2] <-> [?] <-> [Power BI]
As far as I can tell, AAS DirectQuery is not supported in this scenario:
https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-datasource
Is there a workaround to avoid the use of Azure Synapse Analytics? We are not using it, and I am afraid we will not include it in the roadmap.
Thanks in advance for your invaluable advice.
Is there a middle layer between ADLS Gen2 delta tables and Power BI for a scalable and efficient tabular model?
If you want to build Power BI Import Models from Delta tables without routing through Databricks SQL or Spark, you can look into the new Delta Sharing Connector for Power BI, or run a Spark job to export the model data to a Data Lake format that Power BI/AAS can read directly.
If you want DirectQuery models, Synapse SQL Pool or Synapse Serverless would be the path, as these expose the data as SQL Server endpoints, for which Power BI and AAS support DirectQuery.
How to define measures, calculated tables, manage relationships efficiently without the hassle of doing this from scratch in every single .pbix?
Define them in an AAS Tabular Model or a Power BI Shared Data Set.

Possibility of Migration from Sisense to Microsoft Power BI

We are using Sisense for our reporting tool.
We have many clients using Sisense.
These clients have a great many dashboards and widgets.
Sisense stores its data in MongoDB.
I don't know much about Microsoft Power BI.
Is there any possibility of building a migration tool from Sisense to Microsoft Power BI?
Thank you.
Sisense stores its metadata in a MongoDB instance, whereas Power BI stores its metadata in the PBIX file. If you change the file extension from .pbix to .zip, you can navigate and inspect the contents.
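As a quick illustration of that point, here is a minimal Java sketch (file name is a placeholder) that lists the entries inside a .pbix using the standard java.util.zip API:

import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class PbixInspector {
    public static void main(String[] args) throws Exception {
        // A .pbix file is an ordinary zip archive, so ZipFile can open it directly.
        try (ZipFile pbix = new ZipFile("report.pbix")) {
            pbix.stream()
                .map(ZipEntry::getName)
                .forEach(System.out::println); // e.g. Report/Layout, DataModel, ...
        }
    }
}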
When the report is deployed to the Power BI Service, it uses a number of components to store the file and metadata: blob storage and a small SQL instance in the background. You cannot access these items or the data in them.
The on-premises version, Power BI Report Server (available with Premium only, or some enterprise licensing), requires a SQL Server database. This acts as a metadata store for the Power BI front end and also stores the files for the reports loaded to it. You can access this metadata store.
I don't think there is a path to migrate data from the MongoDB to the SQL store, the service, or the files; it will be a full re-creation of the objects from one reporting technology to the other.
Actually, Power BI uses XML and Sisense uses JAQL; parse the JAQL to create a translator that builds rudimentary Power BI reports. Since Sisense uses ElastiCubes, dashboards, and widgets, you have to parse them all to build out the Power BI side. I successfully did this for SSRS, and while Power BI has a more complex layout, it can nonetheless be done...
I built it in .NET using Newtonsoft to extract the JAQL (JSON) and then parsed it to translate to Power BI.
not that hard
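For what it's worth, a rough Java equivalent of that Newtonsoft approach using Jackson; the field names ("widgets", "title") are illustrative guesses, and the real JAQL structure has to be taken from your own Sisense export:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

public class JaqlReader {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Load the exported dashboard JSON (path is a placeholder).
        JsonNode dashboard = mapper.readTree(new File("dashboard.json"));
        // Walk each widget node and pull out whatever is needed to rebuild
        // it on the Power BI side ("widgets"/"title" are hypothetical names).
        for (JsonNode widget : dashboard.path("widgets")) {
            System.out.println(widget.path("title").asText());
        }
    }
}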

Best way to store text values to access with Power BI?

I need to store two text values (user's e-mail and job title) in Azure Blob Storage to feed a dashboard in Power BI.
Is this possible in Azure Blob Storage? Am I better off using another datasource?
I also want to be able to write values to the storage in my Java application.
You cannot store a bare text value directly in Blob Storage, but you can put the text values in a CSV file and then upload the file to Blob Storage.
You can then access the CSV file in Power BI by importing it with the Get Data option.
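Here is a minimal sketch of that CSV-to-blob step from Java, using the current Azure Blob Storage SDK (com.azure:azure-storage-blob); connection string, container, and blob names are placeholders:

import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class UserCsvWriter {
    public static void main(String[] args) {
        // The two text values written as a one-row CSV.
        String csv = "email,job_title\nalice@example.com,Engineer\n";
        BlobClient blob = new BlobServiceClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .buildClient()
                .getBlobContainerClient("dashboard-data")
                .getBlobClient("users.csv");
        blob.upload(BinaryData.fromString(csv), true); // overwrite if it already exists
    }
}

Power BI can then read users.csv from the container via Get Data -> Azure Blob Storage.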
Another way is to insert the text values directly into Cosmos DB and then import from Cosmos DB using Get Data in Power BI.
Note: for Java, the Microsoft Azure Storage Client SDK for Java is available on Maven. This Azure Storage SDK can also connect to Azure Cosmos DB accounts using the Table API.
Azure Cosmos DB Table API accounts use this endpoint format:
https://<storage account>.table.cosmosdb.azure.com/<table>
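And a similar sketch for the Table API route, using the newer com.azure:azure-data-tables library rather than the classic storage SDK mentioned above (connection string, table name, and values are placeholders):

import com.azure.data.tables.TableClient;
import com.azure.data.tables.TableServiceClient;
import com.azure.data.tables.TableServiceClientBuilder;
import com.azure.data.tables.models.TableEntity;

public class UserTableWriter {
    public static void main(String[] args) {
        // Works against an Azure Cosmos DB Table API (or Azure Table Storage) connection string.
        TableServiceClient service = new TableServiceClientBuilder()
                .connectionString(System.getenv("COSMOS_TABLE_CONNECTION_STRING"))
                .buildClient();
        TableClient table = service.createTableIfNotExists("Users");
        // Store the two text values as properties of a single entity.
        TableEntity user = new TableEntity("users", "alice")
                .addProperty("email", "alice@example.com")
                .addProperty("jobTitle", "Engineer");
        table.upsertEntity(user);
    }
}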
This is how you can connect Power BI to Blob Storage and Cosmos DB.
See the reference below for how to feed the values into a Power BI dashboard:
https://www.sqlshack.com/how-to-access-data-from-azure-blob-storage-using-power-bi/

How to read data from a PostgreSQL database using DAS?

We are working on ETL. How can we read data from a PostgreSQL database using streams in Data Analytics Server, perform some operations on the streams, and insert the manipulated data into another PostgreSQL database on a schedule? Please share the procedure to follow.
Actually, you don't need to publish data from your PostgreSQL server. Using WSO2 Data Analytics Server (DAS), you can pull data from your database, do the analysis, and finally push the results back to the PostgreSQL server. DAS has a special connector called "CarbonJDBC", and using that connector you can easily do this.
The current version of the "CarbonJDBC" connector supports the following database management systems:
MySQL
H2
MS SQL
DB2
PostgreSQL
Oracle
You can use the following queries to pull data from your PostgreSQL database and populate a Spark table. Once the Spark table is populated with data, you can start your data analysis tasks.
-- Register a Spark temporary table backed by the PostgreSQL table
create temporary table <temp_table> using CarbonJDBC options (dataSource "<datasource name>", tableName "<table name>");
-- Query the Spark table in your analysis scripts
select * from <temp_table>;
-- Write results back: "insert into table" appends, "insert overwrite table" replaces
insert into / overwrite table <temp_table> <some select statement>;
For more information regarding the "CarbonJDBC" connector, please refer to the following blog post [1].
[1]. https://pythagoreanscript.wordpress.com/2015/08/11/using-the-carbon-spark-jdbc-connector-for-wso2-das-part-1/