How to add NOLOCK for Auto generated SQL queries in Loopback 3 - loopbackjs

I'm using Loopback 3 with SQL Server. We have 20 million rows in our SQL tables, and when we query data through Loopback it takes a long time; on further investigation we found the queries are blocking in SQL Server. We noticed that the auto-generated Loopback queries don't include WITH (NOLOCK). How can we add WITH (NOLOCK) to every SELECT query?

Using Transaction.READ_UNCOMMITTED would produce WITH (NOLOCK).
For example:
YourModel.beginTransaction({isolationLevel: YourModel.Transaction.READ_UNCOMMITTED}, (err, tx) => {
  // Now we have a transaction (tx); pass it as an option so the
  // query runs inside it (and is generated WITH (NOLOCK)):
  YourModel.find({}, {transaction: tx}, (err, results) => {
    // Then commit the transaction:
    tx.commit(err => {});
  });
});
See the docs for more details.

Related

Create Power BI Datamart from Azure Analysis Services

I am trying to create a Power BI datamart from Azure Analysis Services. There is a data model available in Azure Analysis Services, and I can connect using the URL and database name. The data model has ~100 tables in it, with relationships already set up. So my question is: if I want to create a Power BI datamart from the Azure Analysis Services data model, do I need to use the Get Data option of the Power BI datamart, connect to Azure Analysis Services, and select the table and fields 100 times to get all the tables of the data model into my datamart? Is there any import function available where I can import all the tables in a single step?
Why do you want to copy data from AAS into a database?
The reason you find it difficult is that it's an odd thing to do. The query designer for AAS/SSAS generates MDX queries, which are intended to run aggregate queries that return a handful of rows and are wholly unsuitable for extracting whole tables. If you try, the queries will just run forever and fail.
It's possible to extract data from AAS/SSAS tabular models, but you must use DAX, not MDX, so you need to open the Power Query or "Transform Data" window and use the advanced editor.
Each query to load a table should look like this, e.g. to load the 'Customer' table:
let
    Dax = "evaluate Customer",
    Source = AnalysisServices.Database("asazure://southcentralus.asazure.windows.net/myserver", "mydatabase", [Query=Dax])
in
    Source

How to merge (append) data from 2 live models?

I have 2 datasets deployed to the Power BI portal.
On a report I can connect to one dataset using a live connection, and then convert the connection to DirectQuery to also connect to the 2nd dataset.
Then I can create relationships between the models' tables. How do I merge (append) data from 2 live models?
Today you can't.
You can leave the tables separate and write DAX measures that operate over both tables.
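For example, a measure along these lines can aggregate across tables from both models (the table names here are assumptions):

Rows In Both Models =
    COUNTROWS ( 'Customers A' ) + COUNTROWS ( 'Customers B' )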
But if you try creating a DAX calculated table that appends the two, refresh will fail in the service, as this scenario is not currently supported.
Instead of using DirectQuery to Power BI datasets, if the datasets are on Premium capacities you can import tables from both models using the Analysis Services data source and an explicit DAX query, e.g.
let
    Source1 = AnalysisServices.Database("powerbi://api.powerbi.com/v1.0/myorg/someworkspace", "AdventureWorksDW", [Query="evaluate DimCustomer", Implementation="2.0"]),
    Source2 = AnalysisServices.Database("powerbi://api.powerbi.com/v1.0/myorg/someworkspace", "AdventureWorksDW2", [Query="evaluate DimCustomer", Implementation="2.0"]),
    Appended = Table.Combine({Source1, Source2})
in
    Appended

Loopback 3.x Transaction over two data sources

I've used the higher-level transaction API of Loopback, but it supports only one data source, since you start the transaction with
await app.dataSources.db.transaction(async models => {});
I have two data sources pointing to two separate databases on the same MySQL server, and I would like to write into two tables, each residing in a separate database, within the same transaction. Is it possible to achieve this somehow in Loopback? Maybe using the lower-level transaction API? Any experience?
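One untested sketch with the lower-level, per-model API: open a separate transaction on each data source and coordinate the commits manually. This is not atomic across the two databases (the second commit can still fail after the first has been applied), and the model names and promise-style calls are assumptions:

const txA = await app.models.ModelInDbA.beginTransaction(
  {isolationLevel: app.models.ModelInDbA.Transaction.READ_COMMITTED});
const txB = await app.models.ModelInDbB.beginTransaction(
  {isolationLevel: app.models.ModelInDbB.Transaction.READ_COMMITTED});
try {
  await app.models.ModelInDbA.create({name: 'a'}, {transaction: txA});
  await app.models.ModelInDbB.create({name: 'b'}, {transaction: txB});
} catch (e) {
  // Either write failed: roll both back.
  await txA.rollback();
  await txB.rollback();
  throw e;
}
// Commit phase: not atomic across databases; txB.commit() can
// still fail after txA has already been committed.
await txA.commit();
await txB.commit();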

Execute a set of queries as a batch in AWS Athena

I'm trying to execute AWS Athena queries as a batch using aws-java-sdk-athena. I'm able to establish the connection and run the queries individually, but I have no idea how to run 3 queries as a batch. Any help appreciated.
Queries:
1. select * from table1 limit 2
2. select * from table2 limit 2
3. select * from table3 limit 2
You can run multiple queries in parallel in Athena; they will be executed in the background. So if you start your queries using e.g.
StartQueryExecutionResult startQueryExecutionResult = client.startQueryExecution(startQueryExecutionRequest);
you will get an execution ID. This can then be used to check the status of the running queries and see whether they have finished. You can get the execution status of a query using getQueryExecution or batchGetQueryExecution.
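A minimal sketch of that pattern with the v1 aws-java-sdk-athena client; the database name and S3 output location are assumptions:

import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.athena.AmazonAthena;
import com.amazonaws.services.athena.AmazonAthenaClientBuilder;
import com.amazonaws.services.athena.model.*;

public class AthenaBatch {
    public static void main(String[] args) throws InterruptedException {
        AmazonAthena client = AmazonAthenaClientBuilder.defaultClient();
        String[] queries = {
            "select * from table1 limit 2",
            "select * from table2 limit 2",
            "select * from table3 limit 2"
        };
        List<String> executionIds = new ArrayList<>();
        // Start all three queries; they run in parallel in the background.
        for (String sql : queries) {
            StartQueryExecutionRequest request = new StartQueryExecutionRequest()
                .withQueryString(sql)
                .withQueryExecutionContext(new QueryExecutionContext().withDatabase("mydb"))
                .withResultConfiguration(new ResultConfiguration()
                    .withOutputLocation("s3://my-athena-results/"));
            executionIds.add(client.startQueryExecution(request).getQueryExecutionId());
        }
        // Poll all executions at once until none are queued or running.
        while (true) {
            BatchGetQueryExecutionResult batch = client.batchGetQueryExecution(
                new BatchGetQueryExecutionRequest().withQueryExecutionIds(executionIds));
            boolean stillRunning = batch.getQueryExecutions().stream().anyMatch(q -> {
                String state = q.getStatus().getState();
                return "QUEUED".equals(state) || "RUNNING".equals(state);
            });
            if (!stillRunning) break;
            Thread.sleep(1000);
        }
    }
}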
Limits
There are some limits in Athena. You can run up to 20 SELECT queries in parallel.
See documentation:
20 DDL queries at the same time. DDL queries include CREATE TABLE and CREATE TABLE ADD PARTITION queries.
20 DML queries at the same time. DML queries include SELECT and CREATE TABLE AS (CTAS) queries.

Select 20% random records from 1 million records in Django/MySQL

I want to select some x% of rows from a MySQL-backed Django model. For example, if there are 1 million device IDs stored in the table, I want to select 20% of them.
Can anyone help me with an optimised way to do this? I am looking to achieve it using the Django ORM.
If you want to use the ORM, you can get the total with count(), build a list of randomly chosen pks, and then use Model.objects.filter(pk__in=<list of pks>).
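A minimal sketch of that approach (the Device model and app path are assumptions; for 1 million rows the pk list and the resulting IN clause are large, so measure before relying on this in production):

import random
from myapp.models import Device  # hypothetical app and model

def sample_devices(fraction=0.2):
    # Pull all primary keys, then sample the requested fraction.
    pks = list(Device.objects.values_list("pk", flat=True))
    sample_size = int(len(pks) * fraction)
    sampled_pks = random.sample(pks, sample_size)
    return Device.objects.filter(pk__in=sampled_pks)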