Refresh Teiid's SYS.* tables

I'm using Teiid 9.0.2 for data virtualization. I've created a datasource, deployed a VDB, and then created a new table in the datasource (MySQL/PostgreSQL). The new table isn't listed in the VDB's SYS.Tables and SYS.Columns. Is there any way to refresh these tables?

You will need to restart the application server or redeploy the VDB; the metadata is reloaded when the VDB is redeployed.
ExecutionFactory.getMetadata(...) loads the metadata and is invoked when the VDB is deployed.
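If Teiid is running on a WildFly/EAP-based server, the redeploy can be scripted with the JBoss CLI. A minimal sketch, assuming a VDB deployment file named my-vdb.xml (hypothetical name and path):
$JBOSS_HOME/bin/jboss-cli.sh --connect \
    --command="deploy /path/to/my-vdb.xml --force"   # --force replaces the existing VDB, triggering a metadata reload
After the redeploy, SYS.Tables and SYS.Columns should reflect the new table.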

Related

How can I access the metadata DB of a GCP Composer Airflow server?

I have created a Composer environment in a GCP project. I want to access the metadata DB of Airflow, which runs in the background on Cloud SQL.
How can I access it?
I also want to create a table inside that metadata DB, which I will use to store some data queried by one of the Airflow DAGs. Is it OK to create a table inside the metadata DB, or is it only for the Airflow server's own use?
You can access the Airflow internal DB via the UI using Data Profiling -> Ad Hoc Query.
There you can see all the tables with a SQL query like:
SHOW tables;
I wouldn't recommend creating a new table or manually inserting rows into existing tables, though.
You should also be able to access this DB in your DAGs' operators and sensors by using the airflow_db connection, as sketched below.
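For example, a minimal sketch of a read-only query from inside a DAG, assuming the default airflow_db connection (Composer's metadata DB runs on Cloud SQL MySQL); the query itself is just an illustration:
from airflow.hooks.mysql_hook import MySqlHook

def read_metadata_db():
    # 'airflow_db' is the built-in connection that points at the metadata database
    hook = MySqlHook(mysql_conn_id='airflow_db')
    # Read-only query against an Airflow-managed table; avoid writes, for the reasons above
    return hook.get_records("SELECT dag_id, state FROM dag_run LIMIT 10")
A function like this can then be called from a PythonOperator.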

How can I update PBI Cloud App without creating a new one

I have uploaded a report to PBI Cloud with several bookmarks and different data models, like tables and different charts. Now I made some changes to this report and published it, replacing the existing one. Some charts were deleted and some new ones were added. When navigating directly to the report in PBI Cloud I can see the changes, but the changes are not applied to the app that is connected to that report.
Is there any further step needed so that the models in the PBI Cloud "App" also get updated?
In the workspace there is an Update app button that you need to click.
This will take you through the process of republishing the app.

PowerBI report service - data flow questions

This is what I am trying to do: I have various SQL Server databases with data. I created views in all of them. All views will need to be imported, and I specify their relationships. I want this to be refreshed nightly. I want to build various reports off the same data source.
Do I have to use the Power BI Desktop application to import data into the Power BI Report Service? [I have done this so far, but can then create new reports in the cloud on the existing data. It would make sense to connect directly from the Power BI Report Service to my SQL Servers.]
Once I have uploaded data using the desktop application (as I have done so far), how can I view the data model in the report service once it is in the cloud?
To get routinely refreshed data I need to set up a gateway. Is the local Power BI Desktop application still involved in this process, or could I [in theory] delete the local desktop application that pushed the data in initially?
For your questions:
You have two options: use PBI Desktop to connect to the data using import/DirectQuery, then load it to the service; or use dataflows to create an import based on your views, but you will then need to create reports from those. Using dataflows, you'll have to set up a refresh schedule, and then set another refresh schedule for the dataset(s) built on top of them.
You will be limited to dataset sizes of 1GB for the workspace if importing data. You cannot use DirectQuery on dataflows (unless you have enhanced compute with PBI Premium). Once the dataset is loaded, you can create new reports in the service or via Desktop on top of that dataset. If possible, it is recommended to use DirectQuery.
To see the data model, you can use Desktop to connect to the PBI service dataset. This will connect in 'Live Connection' mode and will be limited to that one dataset; you can't add other sources to it (Excel, CSV, SQL, etc.). You can also use Analyze in Excel, a plugin for Excel that can connect to the data model. You can create new reports in the service for existing data models as well.
When creating the report in PBI Desktop you do not use the gateway; you connect to your data sources as normal, and once you load the dataset to Power BI it will match the data sources in the file to the ones set up in the Gateway Admin settings. So you will still need PBI Desktop to create reports, but the gateway is there for the refreshing; the Desktop is not used in the refresh process. You could delete the workbook or application, but if you have to make changes, what will you refer to? (You could download a copy of the report from the service.) It is easier to make changes in the Desktop than in the service, as there is a feature difference between dataset creation in the Desktop vs the service.

WSO2 API Manager - migration from H2 to MySQL: one script is missing

In wso2am/repository/conf/datasources/*.xml, I can see 5 datasources:
<name>WSO2_CARBON_DB</name>
<name>WSO2AM_DB</name>
<name>WSO2AM_STATS_DB</name>
<name>WSO2_MB_STORE_DB</name>
<name>WSO2_METRICS_DB</name>
But in wso2am/dbscripts, I can find only 4 scripts for 4 databases (no script for WSO2AM_STATS_DB).
Is WSO2AM_STATS_DB supposed to stay an H2 database in production, or should it point to an existing database?
You don't need to create the tables in WSO2AM_STATS_DB manually; just creating the database is enough. The tables are created automatically by the Analytics scripts.
Table creation for the statistics database is handled by the Analytics scripts when you configure APIM Analytics, so you will create the statistics database in this step but will not specify a source script.
Ref: https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0
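In MySQL that means you only create an empty database and point the datasource at it. A minimal sketch of the WSO2AM_STATS_DB entry for master-datasources.xml, where the host, database name, and credentials are placeholders:
<datasource>
    <name>WSO2AM_STATS_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Empty database; the Analytics scripts create the tables at runtime -->
            <url>jdbc:mysql://db-host:3306/am_stats_db?autoReconnect=true</url>
            <username>wso2user</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>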

Configuring WSO2 STATS_DB

I have configured API Manager 2.0.0 & API Manager Analytics Pack to use MySQL databases.
For each server, there exists a WSO2AM_STATS_DB. I have given these different names on my MySQL server. I have also pointed my datasources in master-datasources.xml (for APIM) & stats-datasources.xml (for Analytics) to the relevant databases.
I couldn't find any relevant schema (dbscripts) for these databases in their respective packs.
On running, the Analytics database is populated but the APIM database isn't, and it throws an exception. The Analytics database not only gets the schema but also the invocation details of my API.
I am unable to get the stats on my dashboard, though.
Previously, I (unwittingly) configured the H2 repository stats database to be the same for both servers (due to the folder structure) and was able to get all the statistics on my dashboard in the Publisher.
Other configurations I have tried:
On the MySQL server, I pointed both datasources to the same database (the Analytics one with the schema), but with no results on my dashboard (after waiting for a while).
Both WSO2AM_STATS_DB datasources (one in each server) should point to the same database. There are no database scripts for this; the tables are created automatically.
By default, in both servers the stats DB URL looks like this (note the ../ part):
<url>jdbc:h2:../tmpStatDB/WSO2AM_STATS_DB;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000;AUTO_SERVER=TRUE</url>
So if you extract both servers into the same directory, as mentioned in the doc, both datasources will point to the same database (inside tmpStatDB), like this:
/parent_dir
|__wso2am-2.0.0/
|__wso2am-analytics-2.0.0/
|__tmpStatDB/
So what happens here is: wso2am-analytics writes the stats data to the shared database, then APIM reads it and shows the data on its dashboards.
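The MySQL equivalent of that shared-directory trick is simply to point both datasources at one database. A minimal sketch, assuming a database named am_stats_db (a placeholder), used identically in master-datasources.xml (APIM) and stats-datasources.xml (Analytics):
<!-- Same URL in both wso2am-2.0.0 and wso2am-analytics-2.0.0 datasource files -->
<url>jdbc:mysql://db-host:3306/am_stats_db?autoReconnect=true</url>
Analytics then writes into am_stats_db and APIM reads from it, mirroring the H2 setup above.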