DB2 - Read Write Locks - concurrency

I am working on a web application which involves inventory management. My application uses DB2 as the database.
1) In my application, there is a module which just inserts records. This can happen at any time, since the records are entered by customers.
2) And there is another stand-alone module which reads and updates the records entered. This module never inserts records; it only updates existing ones. It is scheduled to run once an hour.
My question is: can the second module read and update records without any issue while the first module is inserting a record at the same time? I am not referring to the record being entered at that moment, but to the other records in the table that need processing. (Bottom line: when the first module inserts data, can my second module read and update data in separate rows of the same table at the same time?)
I am very new to DB2 and have heard about locking in DB2, which is why I raised this question.
Some additional information about my application: both modules are written in Java, the second module is a Spring Boot application, and the operating system is Windows.
Thank you in advance.
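In case it helps: DB2's default isolation level, Cursor Stability, takes row-level locks, so an insert in one connection does not normally block reads or updates of other rows in the same table (barring lock escalation). A minimal JDBC sketch of the second module's update, where the URL, credentials, and table/column names are placeholder assumptions, not taken from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InventoryUpdater {
    public static void main(String[] args) throws Exception {
        // URL, credentials, and table/column names are hypothetical.
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/INVENTORY", "user", "pass")) {
            con.setAutoCommit(false);
            // READ_COMMITTED maps to DB2's Cursor Stability: only the rows
            // actually touched are locked, so concurrent inserts are not blocked.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE INVENTORY_RECORD SET STATUS = ? WHERE ID = ?")) {
                ps.setString(1, "PROCESSED");
                ps.setInt(2, 42);
                ps.executeUpdate();
            }
            con.commit(); // commit promptly to release the row locks
        }
    }
}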

Related

Locking Behavior In Spanner vs MySQL

I'm exploring moving an application built on top of MySQL into Spanner and am not sure if I can replicate certain functionality from our MySQL db.
Basically, a simplified version of our MySQL schema would look like this:
users: id, name, balance
user_transactions: id, user_id, external_id, amount
user_locks: user_id, date
When the application receives a transaction for a user, it starts a MySQL transaction, updates the user_locks row for that user, checks whether the user has sufficient balance for the transaction, creates a new transaction record, and then updates the balance. The application may receive transactions for the same user at the same time, so the lock forces them to run sequentially.
Is it possible to replicate this in Spanner? How would I do so? Basically, if the application receives two transactions at the same time, I want to ensure that they are given an order and that the changed data from the first transaction is propagated to the second transaction.
Cloud Spanner does this by default, since it provides serializability, which means that all transactions appear to have occurred in serial order. You can read more about the transaction semantics here:
https://cloud.google.com/spanner/docs/transactions#rw_transaction_semantics
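As a hedged illustration, a minimal sketch of such a read-write transaction with the Java client, against the simplified schema above (the client setup and the id/amount values are placeholders):

import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.Key;
import com.google.cloud.spanner.Mutation;
import com.google.cloud.spanner.Struct;
import java.util.Arrays;

public class BalanceTransfer {
    public static void apply(DatabaseClient client, long userId, long txnId, long amount) {
        client.readWriteTransaction().run(txn -> {
            // Read inside the transaction; Spanner serializes conflicting
            // transactions, so no explicit user_locks row is needed.
            Struct row = txn.readRow("users", Key.of(userId), Arrays.asList("balance"));
            long balance = row.getLong("balance");
            if (balance < amount) {
                throw new IllegalArgumentException("insufficient balance");
            }
            // Both writes commit atomically; a concurrent transaction on the
            // same user is ordered after this one and sees the new balance.
            txn.buffer(Arrays.asList(
                Mutation.newInsertBuilder("user_transactions")
                    .set("id").to(txnId)
                    .set("user_id").to(userId)
                    .set("amount").to(amount)
                    .build(),
                Mutation.newUpdateBuilder("users")
                    .set("id").to(userId)
                    .set("balance").to(balance - amount)
                    .build()));
            return null;
        });
    }
}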

Issue with Informatica Loading into Partitioned Oracle Target Table

I'm facing an issue loading into a partitioned Oracle target table.
We have two sessions with the same Oracle table as the target:
a. INSERT data into Partition1
b. UPDATE data in Partition2
We are trying to achieve parallelism in the workflow; more partitions and sessions will be created for different data, all loading into the same table but into different partitions.
Currently, when we run both sessions in parallel, the UPDATE session runs successfully, but the INSERT session fails with a NOWAIT error.
NOTE: both sessions load data into different partitions.
We moved the mapping logic into two different stored procedures (one does the INSERT, the other the UPDATE), and they run in parallel without any locking when executed directly from the database.
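For illustration, running the two procedures in parallel directly against the database, as described above, amounts to something like the following JDBC harness; the procedure names, URL, and credentials are hypothetical:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelPartitionLoad {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each call runs in its own session, like the two Informatica sessions.
        pool.submit(() -> call("{call INSERT_PARTITION1_DATA}"));
        pool.submit(() -> call("{call UPDATE_PARTITION2_DATA}"));
        pool.shutdown();
    }

    private static void call(String sql) {
        // Hypothetical connection details; each thread gets its own connection.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             CallableStatement cs = con.prepareCall(sql)) {
            con.setAutoCommit(false);
            cs.execute();
            con.commit();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}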
We also tried specifying the partition name in the target override, but with the same result.
Can you advise what alternatives we have in order to achieve parallelism into the same target table from Informatica?
Thanks in advance

API Gateway generating 11 sql queries per second on REG_LOG

We have sysdig running on our WSO2 API Gateway machine, and we notice that it fires a large number of SQL queries at the database for a minute, then waits a minute and repeats.
The query looks like this:
SELECT REG_PATH, REG_USER_ID, REG_LOGGED_TIME, REG_ACTION, REG_ACTION_DATA
FROM REG_LOG
WHERE REG_LOGGED_TIME>'2016-02-29 09:57:54'
AND REG_LOGGED_TIME<'2016-03-02 11:43:59.959' AND REG_TENANT_ID=-1234
There is no load on the server. What is causing this? What can we do to avoid this?
[Screenshot: sysdig view of the API gateway process]
This particular query is the result of the registry indexing task that runs in the background. The REG_LOG table is queried periodically to retrieve the latest registry actions. The indexing task cannot be stopped, but you can configure its frequency through the following parameter in registry.xml. See [1] for more information.
indexingFrequencyInSeconds
If this table is filled up, one can clean the data using a simple SQL query. However, when deleting the records, one must be careful not to delete all the data. The latest records of each resource path should be left in the REG_LOG table since reindexing of data requires at least one reference of each resource path.
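As a hedged sketch of such a cleanup (the cutoff date, connection details, and SQL dialect are assumptions; MySQL, for example, requires wrapping the self-referencing subquery in a derived table), the following deletes old rows while keeping the newest row of every resource path:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class RegLogCleanup {
    public static void main(String[] args) throws Exception {
        // Delete old REG_LOG rows, but keep the newest row per REG_PATH,
        // since reindexing needs at least one entry for each resource path.
        String sql =
            "DELETE FROM REG_LOG rl "
          + " WHERE rl.REG_LOGGED_TIME < ? "
          + "   AND rl.REG_LOGGED_TIME < (SELECT MAX(r2.REG_LOGGED_TIME) "
          + "                               FROM REG_LOG r2 "
          + "                              WHERE r2.REG_PATH = rl.REG_PATH)";
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/REGDB", "user", "pass");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, Timestamp.valueOf("2016-01-01 00:00:00"));
            System.out.println("Deleted " + ps.executeUpdate() + " rows");
        }
    }
}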
Also, if required, you can take a dump of the data before clearing the REG_LOG table, in case you do not want to lose old records. Hope this answer provides the information you require.
[1] - https://docs.wso2.com/display/Governance510/Configuration+for+Indexing

Is it possible to trigger Informatica workflow using a data in database table?

I am a newbie in ETL and will be using Informatica soon for one of the requirements we have.
The requirement is that Informatica needs to monitor a table in Oracle for certain "trigger data" and as soon as that data is available in that table, Informatica should start executing steps in its workflow.
Is it possible to do this? If yes, could someone please point me to a link/document where this is explained.
Many thanks.
No, it is not possible (checked in PowerCenter 9.5.1).
The Event-Wait task supports only two types of events:
predefined events (the task instructs the Integration Service to wait for the specified indicator file to appear before continuing),
user-defined events (the event is triggered by an Event-Raise task somewhere in the workflow).
Yes, it is possible, and you will need a script that can be created with the following steps (a sketch follows the list):
--create a shell script that checks whether data is present in the table; you can do this simply by taking a count of the table
--if the count is greater than zero, create an empty file, say DUMMY.txt (by using the touch command), at a specified path
--in your Informatica scheduling (either by the scheduler or by a script), check every 5 minutes whether the file is present
--if the file is present, call your Informatica workflow and delete the DUMMY file
--once the workflow is completed, start the process again
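A minimal sketch of that polling step in Java, assuming a placeholder table, indicator path, and connection details (running it every 5 minutes is left to the Windows Task Scheduler, cron, or similar):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TriggerFilePoller {
    public static void main(String[] args) throws Exception {
        // Connection details and table name are hypothetical.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM TRIGGER_TABLE")) {
            rs.next();
            if (rs.getInt(1) > 0) {
                // The Event-Wait task (predefined event) watches for this file.
                Path indicator = Path.of("C:/informatica/indicator/DUMMY.txt");
                if (Files.notExists(indicator)) {
                    Files.createFile(indicator);
                }
            }
        }
    }
}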

Best way to globally use a single file across a network

I am creating an app for a company. The company has one server with multiple databases; each company location uses its own database on this single server, and no location can see the others' databases.
They all want a dictionary for adding words through spellcheck. These words will be saved to a lexicon file.
I, as the programmer, want this lexicon file to reside on the server and then deploy a copy to the client machines on program startup. My question is: what would be the best option for getting the newly added words back into this parent lexicon file and then subsequently updating the clients on a file-changed event?
Would a web service with a FileSystemWatcher work? Or should I just add the words to their database table, parse them out to a lexicon file, and deploy it to the client machines every time an update occurs?
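The question mentions .NET's FileSystemWatcher; purely to illustrate the file-changed-event idea, here is a minimal sketch using Java's analogous WatchService (the watched directory is a placeholder, and note that watch events on network shares are often unreliable, which argues for the database-table approach):

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class LexiconWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical server-side directory holding the parent lexicon file.
        Path dir = Path.of("C:/server/lexicon");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take(); // blocks until a change occurs
            for (WatchEvent<?> event : key.pollEvents()) {
                // Here the service would push the updated lexicon to clients.
                System.out.println("Lexicon changed: " + event.context());
            }
            key.reset();
        }
    }
}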