cube.js: send raw SQL query to Cube server

Is it possible to send a raw SQL request to a Cube.js server, which would then pass it as-is to the database it is connected to? I have some edge cases where I would like to let the client send raw SQL requests instead of using the Cube query format.

The Cube.js way of working with a database is to create a data schema and write queries in the Cube.js format; Cube.js then takes care of SQL generation and execution. While, technically, you can have quite complex statements in your cubes' SQL definitions, you will still have to use the Cube.js query format.
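For reference, a Cube.js-format query is a JSON object of measures, dimensions, and filters sent to the REST API. A minimal sketch in Python (the Orders cube and its members are hypothetical and must exist in your data schema; the token is a JWT signed with your API secret):

    import json
    import urllib.parse
    import urllib.request

    # A query in Cube.js format -- this, not raw SQL, is what the server accepts.
    query = {
        "measures": ["Orders.count"],
        "dimensions": ["Orders.status"],
    }

    # Cube.js exposes the /cubejs-api/v1/load endpoint for queries.
    url = ("http://localhost:4000/cubejs-api/v1/load?query="
           + urllib.parse.quote(json.dumps(query)))
    req = urllib.request.Request(url, headers={"Authorization": "<jwt-token>"})
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["data"])

Cube.js compiles this into SQL against the connected database itself, which is why a raw SQL pass-through is not part of the query API.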

Related

What is the underlying connection mode when using a live connection? (import or direct query mode)

Live connections
When connecting to SQL Server Analysis Services, there's an option to either import data from, or connect live to, the selected data model. If you use import, you define a query against that external SQL Server Analysis Services source, and the data is imported as normal. If you use connect live, there's no query defined, and the entire external model is shown in the field list.
My understanding was that import/DirectQuery can be used to query a data source like SQL Server, whereas live mode is used to connect to an existing Power BI dataset, SSAS, or Azure Analysis Services.
The above quote says "When connecting to SQL Server Analysis Services, there's an option to either import data from, or connect live to, the selected data model." So does this mean that live mode allows us to choose between import/DirectQuery to the live model?
When connecting to SQL Server Analysis Services, there's an option to either import data from, or connect live to, the selected data model.
This quote means that you do not have to use a live connection with SSAS models. Instead, you can construct a query and import the data; the dataset would then be limited to whatever the query returned. If you use a live connection, you have access to the entire model.

Optimize data load from Azure Cosmos DB to Power BI

Currently we have a problem with loading data when refreshing the report against the database, since it has too many records and it takes forever to load everything. The issue is how to load only the data from the last year, to avoid taking so long. As far as I can see, connecting to Cosmos DB in the dialog box allows me to enter a SQL query, but I don't know how to write one for this type of non-relational database.
Power BI has an incremental refresh feature. You should be able to refresh the current year only.
If that still doesn't meet expectations, I would look at a preview feature called Azure Synapse Link, which automatically replicates all Cosmos DB updates into analytical storage that can be queried much faster from Azure Synapse Analytics, in order to refresh Power BI faster.
Depending on the volume of the data, you will hit a number of issues. The first is that you may exceed your RU limit, slowing down the extraction of the data from Cosmos DB. The second is transforming the data from JSON into a structured format.
I would try to write a query that specifies only the fields and items you need. That will reduce the time spent processing and retrieving the data.
For SQL queries it will be something like:
SELECT * FROM c WHERE c.partitionEntity = 'guid'
For more information on the Cosmos DB SQL API syntax, see the official documentation to get started.
You can use the query window in the Azure portal to run the SQL commands, or Azure Storage Explorer to test the query, then move it to Power BI.
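If you prefer to test the query from code, a minimal sketch with the azure-cosmos Python SDK (account, key, database, and container names are placeholders):

    from azure.cosmos import CosmosClient  # pip install azure-cosmos

    client = CosmosClient("https://<account>.documents.azure.com:443/",
                          credential="<key>")
    container = (client.get_database_client("<database>")
                       .get_container_client("<container>"))

    # Same query as above; cross-partition querying is required unless
    # you also filter on the partition key.
    items = container.query_items(
        query="SELECT * FROM c WHERE c.partitionEntity = 'guid'",
        enable_cross_partition_query=True,
    )
    for item in items:
        print(item)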
What is highly recommended is to extract the data into a place where it can be transformed into a structured format, like a table or a CSV file.
For example, use Azure Databricks to extract the data, then turn the JSON into a table-formatted object.
You have the option of running Databricks notebook queries against Cosmos DB, or using Azure Databricks in its own instance. Another option would be to use the change feed with an Azure Function to send and shred the data into Blob Storage and query it from there, using Power BI, Databricks, Azure SQL Database, etc.
In the Source step of your query, you can select based on the Cosmos DB _ts system property, like:
Query ="SELECT * FROM XYZ AS t WHERE t._ts > 1609455599"
In this case, 1609455599 is the Unix timestamp corresponding to 31.12.2020, 23:59:59 (UTC+1), so only data from 2021 onwards will be selected.
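The cutoff can be computed rather than hard-coded; a small Python sketch (reproducing the UTC+1 value above -- adjust the tzinfo to your own zone):

    from datetime import datetime, timezone, timedelta

    # Cosmos DB's _ts is a Unix epoch timestamp in seconds (UTC-based).
    cutoff = int(datetime(2020, 12, 31, 23, 59, 59,
                          tzinfo=timezone(timedelta(hours=1))).timestamp())
    print(cutoff)  # 1609455599

    query = f"SELECT * FROM XYZ AS t WHERE t._ts > {cutoff}"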

PostgreSQL with Django: should I store static JSON in a separate MongoDB database?

Context
I'm building a Django web application that depends on scraped API data.
The workflow:
A) I retrieve data from an external API
B) I insert the structured, processed data that I need into my PostgreSQL database (about 5% of the whole JSON)
I would like to add a third step (before or after step B) that stores the whole external API response in my database, for three reasons:
1) I want to "freeze" the data as an "audit trail", in case the API changes the content (it has happened before).
2) API calls in my business are expensive, and often limited to 6 months of history.
3) I might decide to integrate more data from the API later.
Calling the external API again whenever data is needed is not possible because of 2) and 3).
Please note that the stored API responses will never be updated and read performance is not really important. Also, being able to query the stored API responses would be really nice, to perform exploratory analysis.
To provide additional context, there are a few thousand API calls a day, which represent around 50 GB of data a year.
Here come my question(s)
Should I store the raw JSON in the same PostgreSQL database I'm using for the Django web application, or in a separate datastore (MongoDB or some other NoSQL database)?
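For concreteness, the "same database" option would look roughly like this with Django's built-in JSONField (Django 3.1+), which maps to a PostgreSQL jsonb column; model and field names are illustrative:

    from django.db import models

    class RawApiResponse(models.Model):
        # Full, unmodified API payload, kept as an audit trail.
        payload = models.JSONField()
        endpoint = models.CharField(max_length=255)
        fetched_at = models.DateTimeField(auto_now_add=True)

Since jsonb is queryable (e.g. RawApiResponse.objects.filter(payload__status="ok")), this would also cover the exploratory-analysis requirement.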
If I go with storing the raw JSON in my PostgreSQL database, I fear that my web application's performance will decrease due to the database being "bloated" (50 MB of parsed SQL data in my Django database is equivalent to 2 GB of raw JSON from the external API, so integrating the full API responses will multiply the database's size by 40).
What about cost, given that all of this is hosted on a DBaaS? I understand that the cost will increase greatly (due to the increase in database size), but is either of the two options more cost-effective?

WSO2 Data Services Server: is it possible for a query to have a sort of "dynamic data source"?

I am using Data Services Server. Is it possible for a query to have a sort of "dynamic data source"? Our company has multiple databases, so the same query has to be created once for each database.
At the moment, the data source is fixed in each query definition.

C++: Converting the resultset of a SQL query into JSON

Is there a standard way or well-known method to convert the results of a SQL query, using the MySQL client library or otherwise, into JSON, so I can directly pass the results to a JS script?
Before the obvious objection: no, I'm not allowing SQL queries directly from the browser. I'm implementing a specific subset of SQL in a simple API to expose to clients, who will retrieve the results using AJAX. I figured JSON is the best encoding; I just wanted to check whether there was already a well-known way of doing this before I wrote my own.
Thanks!
You may take a look at:
http://weblogs.asp.net/thiagosantos/archive/2008/11/17/get-json-from-sql-server.aspx
It provides a recipe to get JSON from SQL.
By the way, I assume that it is MS SQL, as you don't specify.
Good Luck
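Regardless of language, the usual pattern is to map each row to an object keyed by column name and serialize the collection; in the MySQL C API the column names come from mysql_fetch_fields. A minimal sketch of that row-to-object shape in Python, with sqlite3 standing in for the MySQL client library:

    import json
    import sqlite3  # stand-in for the MySQL client library

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace')")

    cur = conn.execute("SELECT id, name FROM users")
    cols = [d[0] for d in cur.description]        # column names from the cursor
    rows = [dict(zip(cols, row)) for row in cur]  # one object per row
    print(json.dumps(rows))  # [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]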