Teiid embedded cache max entries - teiid

We are using query caching with Teiid embedded, but the resultset cache is limited to 1024 entries.
In Teiid standalone we can change this property in the file standalone-teiid.xml.
But how can we change it in Teiid embedded?

It depends on how you initialized the embedded engine, specifically the cache factory you set as shown at [1]. If you do not configure one explicitly, the default defined at [2] is used. If you are using the Infinispan cache store, then the limit depends on the Infinispan configuration [3].
[1] https://github.com/teiid/teiid/blob/master/runtime/src/main/java/org/teiid/runtime/EmbeddedConfiguration.java#L143
[2] https://github.com/teiid/teiid/blob/master/runtime/src/main/java/org/teiid/runtime/EmbeddedServer.java#L409
[3] https://github.com/teiid/teiid/blob/master/wildfly/cache-infinispan/src/main/resources/infinispan-config.xml
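For illustration, here is a minimal sketch of where that hook sits when bootstrapping the embedded engine. EmbeddedConfiguration and EmbeddedServer are the actual runtime classes, but the setCacheFactory call and any Infinispan-backed factory are assumptions to verify against your Teiid version.

import org.teiid.runtime.EmbeddedConfiguration;
import org.teiid.runtime.EmbeddedServer;

public class EmbeddedTeiidBootstrap {
    public static void main(String[] args) throws Exception {
        EmbeddedConfiguration config = new EmbeddedConfiguration();
        // Assumed hook (see [1]): if no cache factory is set, the default from [2] applies.
        // An Infinispan-backed factory would make the limits configured in [3] apply instead.
        // config.setCacheFactory(myInfinispanBackedCacheFactory);
        EmbeddedServer server = new EmbeddedServer();
        server.start(config);
        // ... deploy VDBs and run queries ...
        server.stop();
    }
}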

Related

In the DirectQuery model, how do you specify whether a visual needs a query to back-end data sources?

Link: https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-many-to-many-relationships
Storage mode: You can now specify which visuals require a query to back-end data sources. Visuals that don't require a query are imported even if they're based on DirectQuery.
The quote says "You can now specify", but how do I specify this? That is, how do I specify which visuals will need a query to back-end data sources in the DirectQuery model?

How to update Camunda DMN table at runtime?

I have a DMN table created with a few rules and deployed to Camunda.
How do we update DMN tables programmatically at run-time and add a new rule when it is already deployed?
When you change the DMN table but keep the decision key, a new deployment will create a new revision of the table.
So yes, you can update DMN tables at runtime.
You can do so using either the REST API or the Java API.
The Java API relies on the RepositoryService#createDeployment builder. The concrete implementation depends on where your files are stored and how you read them. Here is an example, followed by a sketch for verifying the new revision.
Deployment deployment = repositoryService.createDeployment()
    .addString(resourceName, instanceAsString) // e.g. "decision.dmn" and the updated DMN XML as a String
    .deploy();
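As a hedged follow-up (the decision key and service wiring below are placeholders), you can confirm that the deployment produced a new revision by querying for the latest version of the decision under the same key:

import org.camunda.bpm.engine.RepositoryService;
import org.camunda.bpm.engine.repository.DecisionDefinition;

public class DecisionVersionCheck {
    // Returns the highest-version decision definition deployed for the given key,
    // i.e. the revision created by the deployment above.
    public static DecisionDefinition latest(RepositoryService repositoryService, String decisionKey) {
        return repositoryService.createDecisionDefinitionQuery()
                .decisionDefinitionKey(decisionKey)
                .latestVersion()
                .singleResult();
    }
}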

Dynamic Mapping changes

When there is any change in the DDL of a table, we have to re-import the source and target definitions and change the mapping. Is there a way to dynamically fetch the DDL of the table and do the data copy using an Informatica mapping?
The ETL tool uses an abstraction layer, separate from any physical database. It uses Source and Target definitions that describe what it expects to find in the database the job will connect to. Keep in mind that the same mapping can be applied to many different source and/or target systems; it's not bound to any of them, it just defines what data to fetch and what to do with it.
In Informatica this is reflected by the separation of Mappings, which define the data flow, and Sessions, which indicate where that logic should be applied.
Imagine you're transferring data from multiple servers. A change applied on one of them should not break the whole data integration. If changes were reflected dynamically, a column added on one server would make it impossible to read data from the others.
Of course it's perfectly fine to have such a requirement. It's just not something Informatica supports with this approach.
The only workaround is to create your own application that fetches the table definitions, generates the Workflows, and imports them into Informatica prior to execution; a sketch of the first step follows.
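As an illustration of that first step (this assumes a JDBC-accessible source; the connection details, schema, and table name are placeholders), an external tool could read the current column definitions and use them to regenerate the Informatica objects:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class TableDefinitionFetcher {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the actual source database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password")) {
            DatabaseMetaData meta = conn.getMetaData();
            // List the table's current columns so the generator can rebuild the definitions.
            try (ResultSet cols = meta.getColumns(null, "MY_SCHEMA", "MY_TABLE", "%")) {
                while (cols.next()) {
                    System.out.printf("%s %s(%d)%n",
                            cols.getString("COLUMN_NAME"),
                            cols.getString("TYPE_NAME"),
                            cols.getInt("COLUMN_SIZE"));
                }
            }
        }
    }
}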

About WSO2 API Manager data sources

I'm doing a WSO2 API Manager + Analytics 2.0 POC. When I change the datasource from H2 to Oracle in wso2am-2.0.1-SNAPSHOT, there are two datasource config files:
master-datasources.xml and metrics-datasources.xml. According to Installing and configuring the databases, there should be WSO2AM_DB, WSO2UM_DB and WSO2REG_DB datasource configurations, but I only find WSO2_CARBON_DB and WSO2AM_DB, so my questions are:
1) Is WSO2_CARBON_DB = WSO2UM_DB + WSO2REG_DB?
2) For WSO2_METRICS_DB, according to Enabling Metrics and Storage Types, if we enable JDBC storage, can we store all components' metrics information in one shared DB, or does it need one DB per component (local)?
3) What's WSO2_MB_STORE_DB used for? From the scripts, it's for the Message Store and Andes Context Store. Can we keep using H2 in a production cluster environment?
When I configure wso2am-analytics-2.0.0-SNAPSHOT, I have the following questions:
1) Can we share the WSO2_CARBON_DB setting between the API Manager components and Analytics, or is it better not to share it?
2) For WSO2AM_STATS_DB, is Analytics responsible for aggregating and writing to it, and API Manager responsible for reading it? Which API Manager components need to read it?
3) For the Analytics data store, it supports RDBMS, Cassandra and HBase, but it does not support MongoDB, right?
4) What's GEO_LOCATION_DATA used for? Can we just use H2 in a production environment?
APIM:
1) In the default pack, yes. But in a production environment, it is recommended to separate them into WSO2_CARBON_DB, WSO2UM_DB and WSO2REG_DB. (Note that you still need WSO2_CARBON_DB to store local data, and it can be an H2 database.)
2) You can have a shared DB.
3) WSO2_MB_STORE_DB is required only if you use Advanced Throttling. Tables for this are created by APIM itself. So you don't need to run any scripts on it.
APIM Analytics:
1) You can share WSO2UM_DB and WSO2REG_DB. But don't share (local) WSO2_CARBON_DB.
2) The Store and Publisher components read it.
3) See WSO2 DAS with MongoDB
4) GEO_LOCATION_DATA is used for Geolocation Based Statistics. H2 is not recommended.

Can WSO2 BAM work only with Oracle DB?

I'm able to configure WSO2 BAM data source WSO2_CARBON_DB to work with Oracle DB, but I'm not able to do the same with other data sources.
Is it possible to disable Cassandra and make WSO2 BAM works only with Oracle DB, including all stored data (configuration / input data / analyzed data and so on)?
For the stats store, BAM uses Cassandra, which gives higher read/write performance than an RDBMS. You cannot configure an RDBMS instead of Cassandra.
For other DB-related operations (e.g. the registry and user store) you can use any RDBMS (MSSQL, MySQL, etc.).