How to retry an update after OptimisticLockException in the same transaction

In my program I need to be able to retry a row update after the row has been updated by an external transaction. Hibernate/JPA throws an OptimisticLockException, which I catch.
On retry I try to re-read the row from the DB through JPQL/HQL, but the select statement triggers the same OptimisticLockException.
Is there any way to re-read the latest version of the row and update it in THE SAME transaction?

Session.refresh(object) or Session.lock() would probably be appropriate.
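For example, here is a minimal sketch of the refresh-and-retry pattern inside one transaction. MyEntity, its value field, and the surrounding method are illustrative assumptions, not from the question; also note that the exact exception type depends on whether you use the JPA EntityManager (OptimisticLockException) or the native Session (StaleObjectStateException), and whether a session is still safely usable after a failed flush depends on the provider.

import javax.persistence.OptimisticLockException;
import org.hibernate.Session;

// Retry an update on a versioned (@Version) entity inside the same transaction.
// MyEntity is a hypothetical versioned entity used for illustration.
void updateWithRetry(Session session, Long id, int newValue) {
    MyEntity entity = session.get(MyEntity.class, id);
    entity.setValue(newValue);
    try {
        session.flush();             // fails if a concurrent transaction bumped the version
    } catch (OptimisticLockException e) {
        session.refresh(entity);     // re-read the latest row state and version from the DB
        entity.setValue(newValue);   // re-apply the change on top of the fresh state
        session.flush();             // retry the UPDATE within the same transaction
    }
}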

Related

Corda 4.7 QueryCriteria, LinearStateQueryCriteria

I have the below query criteria to fetch a state based on linearId. I am trying the below code:
//query criteria
QueryCriteria queryCriteria = new LinearStateQueryCriteria(
        null,
        ImmutableList.of(UUID.fromString(linearId))
);
However, I am getting a compile-time error asking me to change QueryCriteria to QueryCriteria.LinearStateQueryCriteria. If I do that, then vaultService.queryBy() does not accept the queryCriteria and throws a compile-time error.
As per the documentation (API: Vault Query - Custom queries in Java), it should have worked. Can someone help?
Glad to see you figured it out, even though I'm not sure why IntelliJ would flag the wrong error. In any event, here's my go-to code sample for making a query on a linear state.
link attached:
https://github.com/corda/samples-java/blob/master/Advanced/obligation-cordapp/workflows/src/main/java/net/corda/samples/obligation/flows/IOUSettleFlow.java#L57-L62
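The gist of that pattern, as a sketch (inside a flow, where getServiceHub() is available; LinearState here stands in for your concrete state class, and the exact constructor overload may vary between Corda versions):

import com.google.common.collect.ImmutableList;
import net.corda.core.contracts.LinearState;
import net.corda.core.node.services.Vault;
import net.corda.core.node.services.vault.QueryCriteria;
import java.util.UUID;

// Keep the variable typed as the base QueryCriteria, but instantiate the
// nested QueryCriteria.LinearStateQueryCriteria: it is a nested class of
// QueryCriteria, not a top-level type, so it must be qualified in Java.
QueryCriteria queryCriteria = new QueryCriteria.LinearStateQueryCriteria(
        null,                                        // participants: match any
        ImmutableList.of(UUID.fromString(linearId))  // linear IDs to look up
);
Vault.Page<LinearState> results =
        getServiceHub().getVaultService().queryBy(LinearState.class, queryCriteria);

Because LinearStateQueryCriteria is a subclass of QueryCriteria, the base-typed variable still satisfies queryBy().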

MFC CRecordset: Cannot flush newly added recordset to data source then continue updating it

I am refactoring some data-handling code in an MFC application that uses the CRecordset API (actually a class derived from it, but the failures come out of CRecordset itself, AFAICT) to talk to an ODBC data source backed by an Oracle database. I have encountered a sequence of operations that the CRecordset API, at least in the version shipped with Visual Studio 2012 (which I know is old, but I am stuck with it for the time being), seems unable to perform.
In particular, consider the following sequence of events, intended to flush changes to the record to the DB so that other queries performed during the sequence can see them:
CRecordset aRecordset(myDatabase);
aRecordset.Open(CRecordset::snapshot, "<some query that yields no records>"); // using CRecordset::dynaset doesn't change things
aRecordset.AddNew();
// set some values on aRecordset...
aRecordset.Update();
aRecordset.Requery(); // removing the Requery calls changes the failure mode
aRecordset.Edit(); // This call fails if the Requery is present
// perform query that needs to pick up on the values set on aRecordset above
// set some more values on aRecordset...
aRecordset.Update(); // This call fails if the Requery is not present
aRecordset.Requery();
aRecordset.Edit();
// perform query that needs to pick up on the values set on aRecordset above
// set yet more values on aRecordset...
aRecordset.Update();
aRecordset.Close();
I get two different failure modes, depending on whether the Requery calls are present or not.
With the Requery calls present, I get the following error from the first call to Edit in the sequence:
Error: Edit attempt failed - not on a record.
Operation failed, no current record.
while with them absent, I get a different error, this time from the second call to Update in the sequence, as follows:
Error: failure updating record.
Invalid cursor state
State:24000,Native:0,Origin:[Microsoft][ODBC Driver Manager]
Am I completely off my rocker in expecting CRecordset to be capable of flushing a newly added record to the database then going back to update the row further? Or is this a simple case of API operator error, and if so, what am I missing here? Is my Visual Studio/MFC too old for this fancy footwork?
Furthermore, it turns out that doing a .Requery() is not an option, due to a requirement that I be able to .Open() a recordset with multiple rows and then do an .Edit()/.Update()/.Edit()/.Update() sequence on each row. Using .Requery() in this case resets the cursor to the beginning with no good way to restore its position, as the Oracle ODBC drivers do not support bookmarks across a requery.

Delete operation not successful in Axapta 2009

I wrote a simple one-record delete job in production, as requested by a user, in one AX instance while another instance was stuck and open. However, the record was not deleted.
try
{
    ttsbegin;
    select forupdate tableBuffer where tableBuffer.RecId == 5457735;
    tableBuffer.delete();
    ttscommit;
}
catch (Exception::Error)
{
    info("Delete operation cancelled.");
}
The table's delete() method was overridden with code after super() to store the deleted record in another table.
I have done the same operation successfully before, but never in a scenario like today's (executed in one instance while the other instance was stuck).
Please suggest possible reasons, as I find the record still persists in both SQL Server and AX.
Thank you.
If you're trying to prevent this from happening, you can use pessimistic locking, where you obtain an update lock:
select pessimisticLock custTable
    where custTable.AccountNum > '1000';
See these links for more info:
http://dev.goshoom.net/en/2011/10/pessimistic-locking/
https://blogs.msdn.microsoft.com/emeadaxsupport/2009/07/08/about-locking-and-blocking-in-dynamics-ax-and-how-to-prevent-it/
https://msdn.microsoft.com/en-us/library/bb190073.aspx

C++ OTL doesn't see external database changes

I have a C++ program that is using OTLv4 to connect to a database. Everything is working fine: I can both insert data into the database and read data out of it.
However, if I change data in the database from another program, this isn't reflected in my C++ program. If I, for example, remove an entry with MySQL Workbench, the C++ program will still see the entry. The data I see is the data as it appeared when the program first logged in to the database.
If I log off and log on again for each query, then I get the current values, but that does not seem very efficient. Similarly, if I run a query from the C++ program that modifies the database, the program will then see the values that are current as of that point.
To me this feels like some sort of over-aggressive caching, but I don't know how that works in OTL; I haven't seen any mention of caches other than possibly the stream pooling, which I know nothing about.
I'm not doing anything fancy. OTL is compiled with these parameters:
#define OTL_ODBC // Compile OTL 4.0/ODBC
#define OTL_UNICODE // Compile OTL with Unicode
#define OTL_UNICODE_EXCEPTION_AND_RLOGON
#define OTL_UNICODE_STRING_TYPE std::wstring
// The following #define is required with MyODBC 3.51.11 and higher
#define OTL_ODBC_SELECT_STM_EXECUTE_BEFORE_DESCRIBE
The code looks something like this:
otl_connect::otl_initialize(1); // Multithreading
otl_connect database;
database.rlogon(...);
// Make queries with otl_stream and direct_exec
otl_stream stream(50, "select * from ...", database);
database.direct_exec("insert ... into ...", otl_exception::disabled);
database.logoff();
Is there something I have missed, some configuration I need to do? Turn off some sort of cache? Maybe I really do need to log on and log off each time?
I found out what is wrong:
Q. OTL: When I insert a new row into a table in MySQL, I can't SELECT it, what's going on?
If you're using a prepared SELECT statement in an otl_stream, and keep executing / reusing the statement to get new rows, you need to commit (call otl_connect::commit()) after the fetch sequence is exhausted each time. The commit call will let your MySQL Server know that your current read only transaction is finished, and the server can start a new transaction, which will make newly inserted rows to be visible to your SELECT statement. In other words, you need to commit your SELECT statements in order to able to see new rows.
From http://otl.sourceforge.net/otl3_faq.htm
So the problem was that whenever I make a SELECT statement, I have to call otl_connect::commit() afterwards, or MySQL won't know that the read-only transaction is finished.

JPA with JTA how to persist many entites in one transaction

I have a list of objects. They are JPA "Location" entities.
List<Location> locations;
I have a stateless EJB which loops through the list and persists each one.
public void createLocations() {
    List<Location> locations = getListOfJPAManagedLocationEntities(); // I'm leaving out the details of this because it has nothing to do with the issue
    for (Location location : locations) {
        em.persist(location);
    }
}
The code works fine. I do not have any problems.
However, the issue is: I want this to be an all-or-none transaction. Currently, on each pass through the for loop, the persist() method inserts a new row into the database. Suppose I have 100 Location objects and the 54th has something wrong with it, so an exception is thrown: there will be 53 records inserted into the database. What I want is for either all of them to succeed or none of them.
I'm using the latest & greatest version of Java EE6, EJB 3.x., and JPA 2. My persistence.xml uses JTA.
<persistence-unit name="myPersistenceUnit" transaction-type="JTA">
And I like having JTA.
I do not want to stop using JTA.
90% of the time JTA does exactly what I want it to do. But in this case, it doesn't seem to.
My understanding of JTA must be inaccurate because I always thought the beginning and end of the EJB method marked the boundaries of the JTA transaction (assume only one method is in-play as I've shown above). By my logic, the transaction would not end until the for-loop is done and the method returns, and then at that point the records are persisted.
I'm using the JTDS driver for SqlServer 2008. Perhaps the database doesn't want to insert a record without immediately committing it. The entity id is defined like this:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
I've checked the spec, and it is not proper to call the various UserTransaction or getTransaction() methods in a JTA environment.
So what can I do?
Thanks.
If you use JTA and container-managed transactions, the default behavior for a session EJB method call is to run in a transaction (it is as if the method were annotated with @TransactionAttribute(TransactionAttributeType.REQUIRED)). That means your code already runs in a transaction and will do what you expect: if an exception occurs at row 54, all previously inserted rows will be rolled back. You can test it by throwing an exception yourself at some point in the loop. Note that if you throw a checked exception declared by your method, you can specify what the container should do when that exception occurs: annotate the exception class with @ApplicationException(rollback=true).
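A minimal sketch of that default behavior (assuming a @Stateless bean and the Location entity from the question; the bean name and the method taking the list as a parameter are illustrative, and REQUIRED is spelled out only for clarity since it is already the default):

import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class LocationService {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED) // already the default
    public void createLocations(List<Location> locations) {
        for (Location location : locations) {
            em.persist(location); // every insert joins the same JTA transaction
        }
        // Any runtime exception thrown here (or at the commit-time flush) marks
        // the transaction for rollback, so none of the rows are kept.
    }
}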
If there is a duplicate entry while looping, the loop itself will continue without problems; when execution reaches an em.flush() placed after the loop, it will throw an exception and roll back the transaction.
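For illustration, here is how that would look in the question's method body (a sketch; em and Location are from the question):

for (Location location : locations) {
    em.persist(location);  // inserts are queued in the persistence context
}
em.flush();                // pushes all pending inserts now; a duplicate key or other
                           // constraint violation surfaces here and rolls back the transaction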
I'm using JBoss. Set your datasource in your standalone.xml or domain.xml to have
<datasource jta="true" ...>
Seems obvious, but I obviously set it wrong a long time ago and forgot about it.