PowerBI Query contains transformations that can't be used for DirectQuery

I am using PowerBI Desktop (2.96.1061.0) to connect to a local MS SQL server so I can prepare some visualizations. It is important to mention that all data connections (Tables, SQL queries) are using the DirectQuery option.
It's been quite a smooth experience so far. No issues at all. Now I am trying to get some new data, again, through a direct SQL query:
SELECT BillId, string_agg(PGroupName, ', ')
FROM
(SELECT bm.ImportedBillsId as BillId, pg.Name as PGroupName
FROM [BillMp] bm
JOIN [Mps] m on bm.ImportersId = m.Id
JOIN [PGroups] pg on m.PoliticalGroupId = pg.Id
GROUP BY bm.ImportedBillsId, pg.Name) t
GROUP BY BillId
but for some reason it will not let me re-create the model and apply the new changes, even though the import wizard is able to preview the actual data prior to the update. The error I am getting is the one in the title: the query contains transformations that can't be used for DirectQuery.
I have also tried to import only the data from the internal/nested query
SELECT bm.ImportedBillsId as BillId, pg.Name as PGroupName
FROM [BillMp] bm
JOIN [Mps] m on bm.ImportersId = m.Id
JOIN [PGroups] pg on m.PoliticalGroupId = pg.Id
GROUP BY bm.ImportedBillsId, pg.Name
and then do the outer grouping (following this article) in Power BI itself, but I am still getting the same error.

Related

Transaction Management with Raw SQL and Models in a single transaction Django 1.11.49

I have an API which reads from two main tables, Table A and Table B.
Table A has a column which acts as a foreign key to Table B entries.
Now, inside the API flow, I have a method which runs the logic below.
Raw SQL -> joining Table A with some other tables and fetching entries which have an active status in Table A.
From the result of the previous query, we take the values from the Table A column and fetch the related rows from Table B using Django models.
It is like
query = "Select * from A where status = 1" #Very simplified query just for example
cursor = db.connection.cursor()
cursor.execute(query)
results = cursor.fetchAll()
list_of_values = get_values_for_table_B(results)
b_records = list(B.objects.filter(values__in=list_of_values))
Now there is a background process which will insert or update data in Table A and Table B. That process does everything using models, utilizing
with transaction.atomic():
do_update_entries()
However, the update does not just modify the old row. It deletes the old row in Table A along with its related rows in Table B, and then inserts new rows into both tables.
Now the problem: if I run the API and the background job separately, everything is fine, but when both run simultaneously, for many API calls the second query (on Table B) fails to return any data, because the statements execute in the following order:
The Table A raw SQL query executes and reads the old data.
The background job runs in a single transaction, deletes the old data and inserts new data with different foreign key values relating it to Table B.
The Table B model query executes, referring to values already deleted by the previous transaction, hence no records.
So, to do both reads in a single transaction, I have tried the options below.
with transaction.atomic():
# Raw SQL for Table A
# Models query for Table B
This didn't work and I am still getting the same issue.
I also tried another approach:
transaction.set_autocommit(False)
# Raw SQL for Table A
# Models query for Table B
transaction.commit()
transaction.set_autocommit(True)
But this didn't work either. How can I read both queries in a single transaction so that the background job's updates do not affect this read process?
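One thing worth checking, as a suggestion rather than a confirmed fix: transaction.atomic() only groups the statements into one transaction; at PostgreSQL's default READ COMMITTED isolation level, each statement inside the block still sees rows committed by other transactions in the meantime, so the background job's delete can become visible between the two reads. Below is a minimal sketch, assuming PostgreSQL and reusing the simplified names from the question (A, B, get_values_for_table_B), of raising the isolation level so both reads share one snapshot.
from django.db import connection, transaction

with transaction.atomic():
    with connection.cursor() as cursor:
        # Must be the first statement in this transaction for it to take effect.
        cursor.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
        cursor.execute("SELECT * FROM A WHERE status = 1")
        results = cursor.fetchall()
    list_of_values = get_values_for_table_B(results)
    # This query now reads from the same snapshot as the raw SQL above,
    # so rows deleted and re-inserted by the background job afterwards
    # do not disappear between the two reads.
    b_records = list(B.objects.filter(values__in=list_of_values))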

Unable to connect snowflake query to power bi - Syntax

I have this query in Snowflake. The query works fine in Snowflake, but when I try to connect it to Power BI, I get a native query error. That error usually pops up when there's a syntax error, but I can't find any syntax error here.
Any help would be appreciated as to why there's an error.
Error: Native Queries aren't supported by this value.
WITH POLICIES AS(
SELECT DISTINCT a.POLICY_NUMBER
,c.DST
,d.DOB
,b.ENROLLED_RPM
,b.RATED_STATE
,a.EVENT_TIMESTAMP
FROM PD_PRESENTATION.CUSTOMER.REQUEST_FLOW_EDGE_MOBILE_TIER as a
LEFT JOIN PD_ANALYTICS.SVOC.POLICY as b
ON a.POLICY_NUMBER = b.POLICY_NUMBER
LEFT JOIN PD_ANALYTICS.SVOC.POLICY_HAS_POLICYHOLDER_PERSON as c
ON b.ID = c.SRC
LEFT JOIN PD_ANALYTICS.SVOC.PERSON as d
ON d.ID = c.DST
WHERE a.USER_GROUP = 'Customer'
AND b.STATUS = 'InForce'
),
MaximumTime AS(
SELECT a.POLICY_NUMBER
,MAX(a.EVENT_TIMESTAMP) as MAXDATED
FROM POLICIES as a
GROUP BY a.POLICY_NUMBER
)
SELECT DISTINCT a.*
,b.DOB
,b.ENROLLED_RPM
,b.RATED_STATE
,c.PAPERLESSPOLICYSTATUS
,c.PARTIALPAPERLESSSTATUS
,c.PAYPLAN
,MAX(c.TENUREPOLICYYEARS) as TENURE
FROM MaximumTime as a
LEFT JOIN POLICIES as b
ON a.POLICY_NUMBER = b.POLICY_NUMBER
LEFT JOIN PD_POLICY_CONFORMED.PEAK.POLICY as c
ON a.POLICY_NUMBER = c.POLICY_NUMBER
GROUP BY a.POLICY_NUMBER
,a.MAXDATED
,b.DOB, b.ENROLLED_RPM
,b.RATED_STATE
,c.PAPERLESSPOLICYSTATUS
,c.PARTIALPAPERLESSSTATUS
,c.PAYPLAN
Based on googling, I suspect that this is caused by the driver you are using (ODBC).
If the SQL runs fine in Snowflake, its syntax is correct, and the error must be somewhere between Power BI and Snowflake rather than in your code.
You can try to execute your query and then look at the query history in Snowflake to check what is actually being executed on Snowflake:
https://docs.snowflake.com/en/sql-reference/functions/query_history.html
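As an illustration of that check, here is a minimal sketch using the snowflake-connector-python package (the connection parameters are placeholders, not taken from the question) that lists the most recent queries together with any error message Snowflake recorded for them:
import snowflake.connector

# Placeholder credentials; fill in your own account details.
conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_warehouse", database="PD_PRESENTATION",
)
cur = conn.cursor()
# QUERY_HISTORY shows the exact text Power BI sent, plus any error it hit.
cur.execute("""
    SELECT start_time, query_text, error_code, error_message
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 100))
    ORDER BY start_time DESC
""")
for start_time, query_text, error_code, error_message in cur:
    print(start_time, error_code, error_message)
conn.close()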
SnowFlake & PowerBI "native queries aren't support by this value"
Maybe it is a lowercase/uppercase issue, as explained here:
https://community.powerbi.com/t5/Issues/Unable-to-query-case-sensitive-Snowflake-tables/idi-p/2028900
During debugging, I would advise you to pinpoint which part of the query causes the error. It could be the quotes you are using in the first CTE, non-uppercase table names, or the * character.

SSAS - cube filtering on import

I am new to SSAS, and after trying for hours to solve this problem I am asking here.
I have an msOLAP (SSAS) cube that I want to import into Power BI,
but due to the large database I want to pre-filter it before importing.
The cube has measures in the cpe_fact table and many other dimensions, i.e. dim_time, dim_product, dim_material, etc.
What I am trying to achieve is getting all the fields from the fact table joined with a subset of dimensions (i.e. only dim_time and dim_product) and filtering them by date (i.e. cpe_fact.sale_date < now - 6 months).
I tried to express this as an MDX query, but could not get any data using this MDX:
SELECT
{ [CPE_FACT].[MAIN].[SALES_Q]} ON COLUMNS,
{ [Selected_Date].[POSTING_DATE] } ON ROWS
FROM [CPE_Analytics]
I get the error "cube either does not exist or has not been processed", even before I had a chance to define the WHERE part.
I tried DAX:
evaluate(filter('CPE_FACT', [AGENT] >= "26003"))
It worked, but only for the CPE_FACT table, and I didn't understand how to join it with the other dimensions...
My question: how can I import some facts joined with a few dimensions from the cube?
Example SSAS connection:
Instead of using an MDX/DAX query, use the Power Query editor in two steps:
Choose the tables you want to import (cpe_fact, dim_time and dim_product).
Apply a filter on the date column in the fact table (cpe_fact) to load the desired results.
Visit: https://radacad.com/only-get-the-last-few-periods-of-data-into-power-bi-using-power-query-filtering

Unable to view Dataframes in Databricks

I have created the following dataframes in databricks:
salebycountry = spark.read.csv("/FileStore/tables/SalesByCountry.csv",inferSchema=True,header=True)
stock = spark.read.csv("/FileStore/tables/Data_Stock.csv",inferSchema=True,header=True)
model = spark.read.csv("/FileStore/tables/Data_Model.csv",inferSchema=True,header=True)
salesdetails = spark.read.csv("/FileStore/tables/Data_SalesDetails.csv",inferSchema=True,header=True)
make = spark.read.csv("/FileStore/tables/Data_Make.csv",inferSchema=True,header=True)
sales = spark.read.csv("/FileStore/tables/Sales.csv",inferSchema=True,header=True)
However, when I try to view / load the data into Power BI with Power BI Desktop, I am only able to see a dataframe if I issue the command .write.saveAsTable(). For example, if I want to see the dataframe called 'model', I need to write the following code: model.write.saveAsTable('Model').
I've never had to do that in the past to view dataframes. I'm wondering if it's because in this case I uploaded the data (CSV) into Databricks, as opposed to ingesting the data via SQL Server? But I'm not sure.
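For reference, a minimal sketch of that workaround (the table names are illustrative, and it assumes Power BI lists only tables registered in the cluster's metastore, not DataFrames that exist purely in the notebook session):
# Write each notebook DataFrame out as a metastore table so it shows up
# when Power BI Desktop connects to the Databricks cluster.
frames = {
    "SalesByCountry": salebycountry,
    "Stock": stock,
    "Model": model,
    "SalesDetails": salesdetails,
    "Make": make,
    "Sales": sales,
}
for name, df in frames.items():
    df.write.mode("overwrite").saveAsTable(name)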

Create charts from SQL query

I want to create a chart in Superset from a SQL query joining two tables.
For example, I go to SQL Lab and execute this query:
select film, count("film") from rental r, payment p where r.rental_id=p.rental_id group by("film") order by count("film") limit 20;
This returns a result, but how do I put it into a chart?
How can I create a chart from a SQL query?
In order to visualize the results from a query executed in SQL Lab, you first need to click on Explore (underneath the Results tab).
Once you are in exploration mode, you can change the "Visualization Type", under "Datasource & Chart Type".