I'm trying to reduce my dataset of 1,000,000 records to only the subset I need (+/- 500) by creating an Inner Join to a different table. Unfortunately, it seems that Power Query drops the "native query" and loads the entire dataset before reducing it by merging it with a related table. I have no access to the database, otherwise I would have written the SQL myself. Is there a way to make the merge work with a native SQL query?
Thanks
I would first check that your "related table" query can run as a native query - right-click on its last step and check whether View Native Query is enabled.
If that's the case, then it may be due to the Join Kind in the Merge Queries step. I've noticed that against SQL Server data sources, Join Kinds other than the default Left Outer Join tend to kill the Native Query option.
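For reference, when folding does work, the whole merge should be pushed to the server as a single inner join, roughly like the sketch below (table and column names are made up for illustration):

-- hypothetical names; this is approximately what a folded inner-join merge
-- would send to SQL Server, so only the ~500 matching rows come back
select t.*
from dbo.BigTable as t
inner join dbo.RelatedTable as r
    on t.KeyColumn = r.KeyColumn;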
I am trying to create a table that is a list of all possible combinations between two tables: products and companies. I'm building a dashboard in Power BI and the data comes from SQL queries. I have the following list structures for the Products table and the Companies table:
and my desired output for analysis is:
There is nothing that relates the companies to the products, and I'm trying to get a list of all of the products for each company. Can I do this in Power BI? If not, is it possible in SQL (there is nothing to join on)? Thank you for your help!
You can do this with a 'hacked' join, or a cross join. I prefer the former from a process POV, but cannot speak to speed or efficiency.
Using a Join (Merge in PQ)
Create a new column 'DummyKey' with a value of 1 on each table.
Merge both tables using your 'DummyKey' columns.
Complete the Join process and choose the columns you want to bring through.
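If you'd rather build the dummy-key version in SQL, a minimal sketch (table and column names are invented for the example) looks like this:

-- hypothetical names; joining on a constant key pairs every company
-- with every product, same as the Power Query merge above
select c.CompanyName, p.ProductName
from (select CompanyName, 1 as DummyKey from Companies) as c
inner join (select ProductName, 1 as DummyKey from Products) as p
    on c.DummyKey = p.DummyKey;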
Cross Joins in Power Query according to MS
I think this 'cross join' approach is only technically correct and does not provide much flexibility for future changes.
Both methods will get you to the same end point, and can be done in SQL or PQ.
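For completeness, the SQL version of the cross join is very short (again with made-up table and column names):

-- hypothetical names; cross join returns every company/product combination
select c.CompanyName, p.ProductName
from Companies as c
cross join Products as p;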
I have a Power BI report for finance. Users need to see the latest data in real time, so I have to choose DirectQuery. But in this mode, some DAX functions, such as DATEADD and DATESMTD, cannot be used
(This DAX function is not supported for use in DirectQuery mode.),
so I need to write a very complex SQL statement to achieve the equivalent effect. But this makes the report very slow (more than 10 seconds) every time it runs, even though the largest table in my data model has fewer than 80,000 rows. I've tried to optimize the SQL statements, but it doesn't help. Any solution?
(I use Power BI Report Server with SQL Server Enterprise edition)
Of course, with no more information I can't know what's taking so much time, but to understand what's happening you can use the following tools:
Power BI Performance Analyzer: this will tell you which part is taking the most time. For more info see MS Docs & SQLBI.
Check the data model and the storage mode of each table involved (i.e. fact table, calendar, customer, etc.). When querying the source, Power BI won't push filters into the query directly when they come from tables in import mode (search for "composite models" on the web).
Limit the number of objects: for each object (visual) in the dashboard a query is sent to the data source, so reducing the number of objects might help. (Remember that objects wait for each other, so one slow-loading object might be causing your problem.)
(Even if you have probably already done it) have a look at the query execution plan. You can also check the plans of the queries automatically created by Power BI by capturing them (the easiest way is to use SQL Server Profiler).
I think that just by using Power BI Performance Analyzer you will be able to see where the problem is, and then do a more targeted search from there.
You need to search for these keywords:
Native query in Power Query: some M functions can be translated directly to SQL (query folding), so that the transformations happen on the SQL Server side.
Aggregated tables in model view: aggregated views can be added for the specific needs of visuals. For example, if a visual shows product category with amount as the value, you can connect an aggregated SQL table to the original one so that the visual can pick up the value faster (see the SQL sketch after this list).
Hybrid tables: import mode and DirectQuery mode can be used together in one table, so you can use DirectQuery for today's data and import mode for older data.
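To illustrate the aggregated-tables idea above, here is a minimal sketch of a pre-aggregated view on the SQL Server side that a Power BI aggregation table could point at (all object names are made up):

-- hypothetical names; the view stays small because it is grouped, so visuals
-- that only need category totals can hit this instead of the big fact table
create view dbo.vw_SalesByCategory as
select
    p.ProductCategory,
    sum(s.Amount) as TotalAmount
from dbo.Sales as s
inner join dbo.Products as p
    on p.ProductId = s.ProductId
group by p.ProductCategory;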
For background, I come from SQL Server and make heavy use of the system tables and information_schema to tell me all about my tables and columns.
I didn't expect the exact same power in Athena, but I'm currently very shocked and frustrated with how little seems to be available - unless I've missed something?
For example, 'describe mytable' just describes one table at a time.
How about showing the columns for ALL tables in one result?
It also does not output the table name, nor allow you to manually add it as a custom column.
All of these "show/list/describe" commands seem to produce a text list rather than a recordset, so you cannot take the results and join them to other tables or views to build more complex outputs.
Is there any other way to query the contents of my databases?
Thanks in advance
Athena is based on Presto. Presto provides an information_schema schema, and I checked that it is accessible in Athena.
You can run e.g. a query like:
SELECT * FROM information_schema.columns;
to get a list of columns of all tables.
You can filter this by "database":
SELECT * FROM information_schema.columns WHERE table_schema = '<databasename>';
Note however that these types of queries are not necessarily very performant.
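Because these are ordinary tables, you can also join and aggregate them like anything else - for example, to count the columns per table in one database (the database name is a placeholder):

-- counts columns per table within one schema/database
SELECT t.table_name,
       count(c.column_name) AS column_count
FROM information_schema.tables AS t
JOIN information_schema.columns AS c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
WHERE t.table_schema = '<databasename>'
GROUP BY t.table_name
ORDER BY t.table_name;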
I have a SQL query to get the data into Power BI. For example:
select a,b,c,d from table1
where a in ('1111','2222','3333' etc.)
However, the list of values ('1111','2222','3333' etc.) will change every day, so I would like the SQL statement to be updated before refreshing the data. Is this possible?
Ideally, I would like to keep a spreadsheet with a list of a values (in this example) so that, before each refresh, it feeds those parameters into this script.
Another problem I have is that the list will have a different number of parameters each time, so the last value needs to be without a trailing comma.
Another option I was considering is to run the script without the where a in ('1111','2222','3333' etc.) clause, then load the spreadsheet with the list of a values and filter the report down based on that list - however, this would be a lot of data to import into Power BI.
It's my first post ever, although I was sourcing help from Stackoverflow for years, so hopefully, it's all clear.
I would create a new Query to read the "a values" from your spreadsheet. I would set the Load To / Import Data option to Only Create Connection (to avoid duplicating the data).
Then in your SQL query I would remove the where clause. With that gone you actually don't need to write custom SQL at all - just select the table/view from the Navigation UI.
Then from the "table1" query I would add a Merge Queries step, connecting to the "a values" Query on the "a" column, using the Join Kind: Inner. The resulting rows will be only those with a matching "a" column value (similar to your current SQL where clause).
Power Query won't be able to send this to your SQL Server as a single query, so it will first select all the rows from table1. But it is still fairly quick and efficient.
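Conceptually, the Merge step reproduces this inner join on the Power Query side (the "AValues" table is hypothetical here, since in your case the list lives in the spreadsheet rather than in the database):

-- illustration only: "AValues" stands in for the spreadsheet list
select t.a, t.b, t.c, t.d
from table1 as t
inner join AValues as v
    on t.a = v.a;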
I am looking for solutions or ideas on how to speed up the processing of large data sets in SAS.
What would you recommend?
What is better: a data step or the proc sql procedure?
Speeding up your data processing depends on where your data is saved.
Your data can be either in:
a SAS table, or
a database table (Microsoft SQL Server, Oracle, DB2, MySQL, etc.).
Use a SAS data step when:
You are querying/processing SAS tables,
You want to do iterative processing (e.g. retaining values or using arrays).
Use Proc SQL when:
You are querying a large database table,
You can do a SQL "pass-through", where you send SQL code to be executed on the DB server and only the output is returned to SAS, instead of bringing the entire tables over the network to SAS and then filtering them (see the sketch after this list),
You want to query SAS tables but prefer SQL joins to data step merges.
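As a rough illustration of the pass-through point, the query you hand to the database is ordinary SQL for that engine; because it filters and aggregates on the server, only the small summary result set travels back to SAS. Table and column names below are made up:

/* made-up names; in SAS this select would be wrapped in an explicit
   pass-through (connect to ... ; select ... from connection to ...) */
select region,
       count(*) as order_count,
       sum(order_amount) as total_amount
from orders
where order_date >= '2020-01-01'
group by region;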
Another topic you should consider is efficient programming, i.e. optimising your queries and look-ups.
I find Proc SQL to be better for my use cases. We may need some more specifics on the size and variety of data you're trying to join/export, etc.
Give us some info on that and we can try to help.
Tips:
Limit the fields you're pulling over
Subset data
Anecdotally, in my experience Proc SQL seems faster.
Here are two tips on speeding up queries with Proc SQL:
In general, you want to rule out as much data as possible when querying. If you are using Proc SQL, the order of the restrictions in the where clause matters: put the most restrictive parts first.
For example, if I'm querying a database for teachers with the last name "JONES" that were hired after Jan 2005, I would structure my where clause like this: where last_name = 'JONES' and hire_date > 200501. I would do this because the last name is likely to exclude more records than the hire date restriction.
When possible, don't use select *; instead, list out the specific columns that you need. Remember, even if you are doing a calculation with a column, you don't have to include that column in your select statement.
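Putting both tips together, a query following that advice might look like this (table and column names are invented for the example, and it would sit inside a proc sql step):

/* most restrictive condition first, and only the columns that are needed */
select teacher_id, last_name, hire_date
from teachers
where last_name = 'JONES'
  and hire_date > 200501;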
Here is a very useful resource for understanding how to use Proc SQL efficiently. I recommend reading it in its entirety if you do a lot of work with large data sets in SAS.
http://www2.sas.com/proceedings/sugi29/127-29.pdf