WSO2 DSS - executing operation on backup database if query failed

I have a problem with making DSS use a backup database if the original one is down. The idea is: there are 2 databases, the second being a clone of the first. I want an operation that tries to execute a query on the first db and, if that fails (because, for example, the first db is unavailable), executes the same query on the second database. How can I achieve such a result?

Related

Is there a way to tell when AWS Amplify Datastore is initialized or ready to be queried?

I have an application that needs to update the UI with the results of an Amplify DataStore query. I am making the query as soon as the component mounts/renders, but the results of the query are empty even though I know there is available data. If I add a timeout of 1 second or greater before making the query, then the query returns the expected data. My hunch is that the query returns an empty result set before the response from the delta sync table (which shows there is data to be fetched) has arrived.
Is there any type of event provided by Datastore that would allow me to wait until the data store is initialized or has data to query before making the query?
I understand that I could use the .observe functionality of datastore for a similar effect, but this is currently not an option.
First, if you do not use the DataStore start method, then sync from the backend starts when the first query is submitted. Queries run against the local store, so the data won't be there yet.
Second, DataStore publishes events on the Amplify Hub so that you can monitor changes, such as a set of data being synced, DataStore being ready, and even DataStore being ready with all data synced locally.
See the documentation on DataStore.start and the documentation for DataStore events for more information.
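For illustration, a minimal sketch of waiting on the Hub's 'ready' event before querying (event name per the Amplify DataStore docs; loadData is a hypothetical callback standing in for your own query code):

import { Hub } from 'aws-amplify';

// 'ready' fires once DataStore has finished initializing and the initial
// sync is complete, so local queries will return the synced data.
const stopListening = Hub.listen('datastore', ({ payload }) => {
  if (payload.event === 'ready') {
    stopListening(); // in recent Amplify versions Hub.listen returns an unsubscribe function
    loadData();      // hypothetical: run your DataStore.query(...) here
  }
});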

Issue with Informatica Loading into Partitioned Oracle Target Table

I am facing an issue with loading into a partitioned Oracle target table.
We have 2 sessions with the same Oracle table as target:
a. INSERT data into Partition1
b. UPDATE data in Partition2
We are trying to achieve parallelism in the workflow; more partitions and sessions will be created for different data going into the same table, each targeting a different partition.
Currently, when we run both these sessions in parallel, the UPDATE session runs successfully, but the INSERT session gets a NOWAIT error.
NOTE: both are loading data for different partitions.
We moved the mapping logic into 2 different stored procedures (one does the INSERT, the other the UPDATE), and they run in parallel without any locking when executed directly on the DB.
We tried specifying the partition name in the target override too (partition-extended syntax, as sketched below), but with the same result.
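By partition-extended syntax we mean Oracle DML of roughly this shape (table, partition, and column names here are placeholders):

INSERT INTO target_table PARTITION (partition1) (key_col, val_col) VALUES (:1, :2);
UPDATE target_table PARTITION (partition2) SET val_col = :2 WHERE key_col = :1;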
Can you advise what alternatives we have in order to achieve parallelism into the same target table from Informatica?
Thanks in advance

API Gateway generating 11 SQL queries per second on REG_LOG

We have sysdig running on our WSO2 API Gateway machine and we notice that it fires a large number of SQL queries at the database for a minute, then waits a minute and repeats.
Every minute it goes wild, waits for a minute, and goes wild again with requests of the following format:
SELECT REG_PATH, REG_USER_ID, REG_LOGGED_TIME, REG_ACTION, REG_ACTION_DATA
FROM REG_LOG
WHERE REG_LOGGED_TIME>'2016-02-29 09:57:54'
AND REG_LOGGED_TIME<'2016-03-02 11:43:59.959' AND REG_TENANT_ID=-1234
There is no load on the server. What is causing this? What can we do to avoid this?
[Screenshot: sysdig output for the API Gateway process]
This particular query is the result of the registry indexing task that runs in the background. The REG_LOG table is queried periodically to retrieve the latest registry actions. The indexing task cannot be stopped; however, you can configure its frequency through the following parameter in registry.xml. See [1] for more information.
indexingFrequencyInSeconds
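For example (the element typically sits under the indexing configuration section of registry.xml; the value shown is illustrative, in seconds):

<indexingConfiguration>
    <indexingFrequencyInSeconds>300</indexingFrequencyInSeconds>
</indexingConfiguration>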
If this table has filled up, you can clean the data using a simple SQL query. However, when deleting the records, be careful not to delete all the data: the latest record for each resource path should be left in the REG_LOG table, since reindexing of data requires at least one reference to each resource path.
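As an illustrative sketch only (using the REG_LOG columns shown in the query above; multi-column NOT IN works on Oracle and MySQL, though MySQL additionally requires materializing the subquery before deleting from the same table), a cleanup that keeps the latest record per resource path could look like:

DELETE FROM REG_LOG
WHERE (REG_TENANT_ID, REG_PATH, REG_LOGGED_TIME) NOT IN (
    SELECT REG_TENANT_ID, REG_PATH, MAX(REG_LOGGED_TIME)
    FROM REG_LOG
    GROUP BY REG_TENANT_ID, REG_PATH
);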
Also, if required, you can take a dump of the REG_LOG table before clearing it, in case you do not want to lose old records. Hope this answer provides the information you require.
[1] - https://docs.wso2.com/display/Governance510/Configuration+for+Indexing

Web service input into SQL query into R in Azure ML

I have the following simple setup in Azure ML.
Basically the Reader is a SQL query to a DB which returns a vector called Pdelta, which is then passed to the R script for further processing; the results are then returned back to the web service. The DB query is simple (SELECT Pdelta FROM ...) and it works fine. I have set the DB query as a web service parameter as well.
Everything seems to work fine, but at the end, when I publish it as a web service and test it, it somehow asks for an additional input parameter, called PDELTA.
I am wondering why this is happening; what is it that I am overlooking? I would like this web service to ask for only one parameter, the SQL query (Delta Query), which would then deliver the Pdeltas.
Any ideas or suggestions would be greatly appreciated!
You can remove the web service input block and publish the web service without it. That way the Pdelta input will be passed in only from the Reader module.

How to check whether sqlite database is attached or not?

I am using sqlite to store my data. I have two databases. In my application, each time a new request comes in, I attach the first db to the second db. The problem is that if two requests come in, it reports that the db is already in use (it is trying to attach twice with the same alias name 'db'). I want to know if there is any way to check whether a database is attached or not.
PRAGMA database_list;
outputs a resultset with the full list of available databases. Each row has three columns: the sequence number, the database name, and the database file (empty if the database is not associated with a file). The primary database is always named main, and the temporary db is always temp.
sqlite> attach "foo.db" as foo;
sqlite> pragma database_list;
0|main|
2|foo|/Users/me/tmp/foo.db
I assume you are reusing the same connection to the database across multiple requests. Because databases are attached to the connection object, attaching fails on the second and subsequent requests over the same connection. The solution, I think, is to attach the database immediately after a new connection is made, rather than on every request.
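If you do want to check at request time, SQLite 3.16+ also exposes pragmas as table-valued functions, so the check can stay in plain SQL (alias name 'db' as in the question):

SELECT COUNT(*) FROM pragma_database_list WHERE name = 'db';
-- returns 1 if the alias 'db' is attached on this connection, 0 otherwise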