I was wondering if there is a way to query QuestDB for its configuration properties through SQL. I have noticed meta functions to see tables and table columns.
Querying configuration variables is not currently supported in the open-source version.
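For the meta functions mentioned in the question, something like the following works; 'trades' is only a placeholder table name:

-- list all tables in the database
SELECT * FROM tables();

-- list the columns of one table ('trades' is a placeholder)
SELECT * FROM table_columns('trades');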
I'm trying to ingest data from different tables within the same database using the Data Fusion Multiple Database Tables plugin, writing to BigQuery tables via the BigQuery Multi Table sink. I wrote three different custom SQL statements and added them in the plugin section under "Data Section Mode" > "Custom SQL Statements".
The problem is that when I preview, or deploy and run, the pipeline, I get the error "BigQuery Multi Table has no outputs. Please check that the sink calls addOutput at some point."
What I have tried to figure out this problem:
Ran the custom SQL statements directly against the database; they worked properly.
Created pipelines specific to each custom SQL, i.e. single-table ingestion from SQL Server to a BigQuery table as the sink; these worked properly.
Tried a different "Data Section Mode" under the Multiple Database Tables plugin, namely "Table Allow List". It works, but it simply inserts all the data, with no option to transform any column or apply filtering. I did that to check whether the plugin can reach the database and read data; it can.
Data Pipeline - Multiple Database Tables Plugin Config - 1
Data Pipeline - Multiple Database Tables Plugin Config - 2
In conclusion, I would like to ingest data from multiple tables in one database within a single data pipeline. If possible, I would like to do it by writing a custom SQL statement for each table.
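For illustration, each custom SQL I have in mind is just a plain per-table SELECT with some column selection and filtering, roughly like this (table and column names are placeholders, not my real schema):

-- placeholder example of one per-table custom SQL statement
SELECT id,
       customer_name,
       CAST(order_total AS DECIMAL(18, 2)) AS order_total,
       order_date
FROM dbo.orders
WHERE order_date >= '2021-01-01';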
Open to any advice and suggestions.
Thank you.
Using the MongoDB BI Connector via Atlas as the source database
Using MySQL for pre-aggregations
Currently, it's not possible to use pre-aggregations if the source data is stored in MongoDB. Please see this GitHub issue for details: https://github.com/cube-js/cube.js/issues/2858
If you'd like to use pre-aggregations for performance, syncing data from MongoDB into another data store is the only viable option.
I'm installing WSO2 IS 5.10.0 and I am creating five PostgreSQL databases per the column titled "Recommended Database Structure" in this document:
https://is.docs.wso2.com/en/next/setup/setting-up-separate-databases-for-clustering/
Actually it's six databases if you count the CARBON_DB. The five PostgreSQL databases are named as follows: SHARED_DB, USERSTORE_DB, IDENTITY_DB, CONSENT_MGT_DB and BPS_DB. I already have them configured in the deployment.toml file. I've created the databases in PostgreSQL and I have to manually execute the SQL files against each database in order to create the schema for each database. Based on the document in the link, I have figured out which SQL files to execute for four of the databases. However, I have no idea what SQL files I need to execute to create the USERSTORE_DB schema. It's got to be one of the files under the dbscripts directory but I just don't know which one(s). Can anybody help me on this one?
The CARBON_DB contains product-specific data, and by default it is stored in the embedded H2 database. There is no requirement to point that DB to PostgreSQL. Hence you only need to worry about these databases: SHARED_DB, USERSTORE_DB, IDENTITY_DB, CONSENT_MGT_DB and BPS_DB.
As for your next question, you can find the DB scripts related to USER_DB (USERSTORE_DB) in the dbscripts/postgresql.sql file. This file has tables whose names start with UM_; these are the user management tables. You can use those table scripts to create the tables in USERSTORE_DB.
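As a quick sanity check after running the script against USERSTORE_DB, you can list the user management tables. This is only a sketch, assuming PostgreSQL defaults (unquoted identifiers folded to lowercase, public schema):

-- list the UM_ (user management) tables created in USERSTORE_DB
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name LIKE 'um\_%' ESCAPE '\'
ORDER BY table_name;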
Refer to the following doc for more information:
[1] https://is.docs.wso2.com/en/5.10.0/administer/user-management-related-tables/
I am new to Informatica Cloud. I have a list of queries ready in a table, like below.
Now I want to take the queries from this table one by one, use each as a source query, and load whatever results are returned into the target. All the tables are already created in both source and target.
I just need to copy the data based on the dynamic queries kept in one of my SQL tables.
If anyone has any ideas, please share your thoughts with me. It would be a great help.
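My actual table is not shown here, but a purely hypothetical sketch of the shape of such a query-list table (invented names, not my real schema) would be:

-- hypothetical query-list table: one row per source query to run
CREATE TABLE query_list (
    query_id     INT,
    source_query VARCHAR(4000),
    target_table VARCHAR(128)
);

INSERT INTO query_list VALUES
    (1, 'SELECT * FROM sales.orders WHERE order_date >= ''2021-01-01''', 'stg_orders'),
    (2, 'SELECT customer_id, customer_name FROM sales.customers', 'stg_customers');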
The source connection will be the connector to your source database, and the Source Type will be Query. From there it depends on how you are managing your variables. See the thread on the Informatica Network for links to multiple examples.
Read the table as you normally would in the cloud, then pass each record into the SQL transformation for execution. Configure where the SQL transformation has to execute, and it will run the queries in the database you want.
You can use a SQL task to run dynamic SQL queries.
Link to the SQL task approach: https://www.datastackpros.com/2019/12/informatica-cloud-incremental-load_14.html
In Azure SQL Data Warehouse I just used the below T-SQL code to enable automatic statistics creation. The command ran successfully, but when I checked the database properties under the Options tab, Auto Create Statistics is still set to False.
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
Please let me know if I'm missing something here. I also have db_owner access to the database.
I'm guessing that you are using SQL Server Management Studio.
I was able to reproduce the symptom by turning off and on auto_create_statistics.
The issue appears to be that the database metadata is cached in SSMS. Right-click the database name and select "Refresh" before selecting "Properties". Using this method I got the correct setting for auto_create_statistics showing up each time.
My tests were done using SSMS 17.7.
(The need to refresh the database metadata can also arise when adding or removing tables, columns, etc.)
You can also query sys.databases and check the is_auto_create_stats_on column.
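For example, to check the current setting for the database from the question:

-- read the setting straight from the catalog view
SELECT name, is_auto_create_stats_on
FROM sys.databases
WHERE name = 'MyDB';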