JPA with JTA: how to persist many entities in one transaction - jpa-2.0

I have a list of objects. They are JPA "Location" entities.
List<Location> locations;
I have a stateless EJB which loops through the list and persists each one.
public void createLocations() {
    // I'm leaving out the details of this because it has nothing to do with the issue
    List<Location> locations = getListOfJPAManagedLocationEntities();
    for (Location location : locations) {
        em.persist(location);
    }
}
The code works fine. I do not have any problems.
However, the issue is: I want this to be an all-or-none transaction. Currently, each time through the for loop, the persist() method will insert a new row into the database. Suppose I have 100 location objects and the 54th object has something wrong with it and an exception is thrown. There will be 53 records inserted into the database. What I want is: either all of them succeed or none of them do.
I'm using the latest & greatest version of Java EE 6, EJB 3.x, and JPA 2. My persistence.xml uses JTA.
<persistence-unit name="myPersistenceUnit" transaction-type="JTA">
And I like having JTA.
I do not want to stop using JTA.
90% of the time JTA does exactly what I want it to do. But in this case, it doesn't seem to.
My understanding of JTA must be inaccurate because I always thought the beginning and end of the EJB method marked the boundaries of the JTA transaction (assume only one method is in-play as I've shown above). By my logic, the transaction would not end until the for-loop is done and the method returns, and then at that point the records are persisted.
I'm using the jTDS driver for SQL Server 2008. Perhaps the database doesn't want to insert a record without immediately committing it. The entity id is defined like this:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
I've checked the spec., and it is not proper to call the various "UserTransaction" or "getTransaction()" methods in a JTA environment.
So what can I do?
Thanks.

If you use JTA and container-managed transactions, the default behavior for a session EJB method call is to run in a transaction (it is like annotating it with @TransactionAttribute(TransactionAttributeType.REQUIRED)). That means that your code already runs in a transaction and will do what you expect: if an exception occurs at row 54, all previously inserted rows will be rolled back. You can go ahead and test it by throwing an exception yourself at some point in the loop. Note that if you throw a checked exception declared by your method, you can specify what the container should do when that exception occurs: annotate the exception class with @ApplicationException(rollback=true).
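As a minimal sketch of that setup (the validation check and the exception class are hypothetical, and the list is passed in as a parameter for brevity; none of this is from the question):

import java.util.List;
import javax.ejb.ApplicationException;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// In its own file: a checked exception that tells the container to roll back
@ApplicationException(rollback = true)
public class InvalidLocationException extends Exception {
    public InvalidLocationException(String message) { super(message); }
}

@Stateless
public class LocationService {

    @PersistenceContext
    private EntityManager em;

    // REQUIRED is already the default for container-managed transactions;
    // shown explicitly for clarity
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void createLocations(List<Location> locations) throws InvalidLocationException {
        for (Location location : locations) {
            if (location.getName() == null) { // hypothetical validation failure
                throw new InvalidLocationException("location without a name");
            }
            em.persist(location);
        }
        // If the exception fires at entity 54, the container rolls back
        // the whole JTA transaction, so entities 1-53 are not inserted.
    }
}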

If there is a duplicate entry while looping, the loop itself will continue without problems; only when execution reaches the em.flush() line after the loop will an exception be thrown and the transaction rolled back.
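A minimal sketch of that pattern, using the loop from the question:

for (Location location : locations) {
    em.persist(location); // queues the insert; it may not hit the database yet
}
em.flush(); // pushes the pending INSERTs now, so constraint violations surface inside the transaction and roll it back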

I'm using JBoss. Set your datasource in your standalone.xml or domain.xml to have
<datasource jta="true" ...>
Seems obvious, but I obviously set it wrong a long time ago and forgot about it.

Related

Google Cloud Datastore - get after insert in one request

I am trying to retrieve an entity immediately after it was saved. When debugging, I insert the entity and check the entities in the Google Cloud console, and I see that it was created.
Key key = datastore.put(fullEntity)
After that, I continue with getting the entity with
datastore.get(key)
, but nothing is returned. How do I retrieve the saved entity within one request?
I've read this question Missing entities after insertion in Google Cloud DataStore
but I am only saving 1 entity, not tens of thousands like in that question
I am using Java 11 and Google Datastore (the com.google.cloud.datastore package).
edit: added the code showing how the entity was created
public Key create.... {
    // creating the entity inside a method
    Transaction txn = this.datastore.newTransaction();
    this.datastore = DatastoreOptions.getDefaultInstance().getService();
    Builder<IncompleteKey> builder = newBuilder(entitykey);
    setLongOrNull(builder, "price", purchase.getPrice());
    setTimestampOrNull(builder, "validFrom", of(purchase.getValidFrom()));
    setStringOrNull(builder, "invoiceNumber", purchase.getInvoiceNumber());
    setBooleanOrNull(builder, "paidByCard", purchase.getPaidByCard());
    newPurchase = entityToObject(this.datastore.put(builder.build()));
    if (newPurchase != null && purchase.getItems() != null && purchase.getItems().size() > 0) {
        for (Item item : purchase.getItems()) {
            newPurchase.getItems().add(this.itemDao.save(item, newPurchase));
        }
    }
    txn.commit();
    return newPurchase.getKey();
}
after that, I am trying to retrieve the created entity
Key key = create(...);
Entity e = datastore.get(key)
I believe that there are a few issues with your code, but since we are unable to see the logic behind many of your methods, here is my guess.
First of all, as you can see in the documentation, it is possible to save and retrieve an entity in the same code, so that is not the problem.
It seems like you are using a transaction, which is the right way to perform multiple operations as a single atomic action, but it doesn't seem like you are using it properly. You only instantiate and commit it, but you don't put any operations on it. Furthermore, you are using this.datastore to save to the database, which completely bypasses the transaction.
So you either save the object when it has all of its items already added or you create a transaction to save all the entities at once.
And I believe you should use the entityKey in order to fetch the added purchase afterwards, but don't mix the two.
Also you are creating the Transaction object from this.datastore before instantiating the latter, but I assume this is a copy-paste error.
Since you're creating a transaction for this operation, the entity put should happen inside the transaction:
txn.put(builder.build());
Also, the operations inside the loop where you add the purchase.getItems() to the newPurchase object should also be done in the context of the same transaction.
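A rough sketch of what the method could look like with every write going through the transaction (the signature, the try/finally shape, and the omission of the item loop are my assumptions; the helper methods are the question's own):

public Key create(Purchase purchase, IncompleteKey entitykey) { // signature assumed
    Transaction txn = this.datastore.newTransaction();
    try {
        Builder<IncompleteKey> builder = newBuilder(entitykey);
        setLongOrNull(builder, "price", purchase.getPrice());
        setTimestampOrNull(builder, "validFrom", of(purchase.getValidFrom()));
        setStringOrNull(builder, "invoiceNumber", purchase.getInvoiceNumber());
        setBooleanOrNull(builder, "paidByCard", purchase.getPaidByCard());
        // put through the transaction, not this.datastore, so the write is part of the atomic unit
        Entity saved = txn.put(builder.build());
        // ... save the purchase items through the same txn here ...
        txn.commit();
        return saved.getKey();
    } finally {
        if (txn.isActive()) {
            txn.rollback(); // clean up if commit was never reached
        }
    }
}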
Let me know if this resolves the issue.
Cheers!

Repeated use of OfflineTileProvider freezes app

I have an activity with a fragment which contains a MapView. The MapView uses an OfflineTileProvider, with tiles downloaded using the CacheManager. When entering and exiting the activity repeatedly, the app will sometimes freeze. Sometimes it happens after revisiting the activity once or twice, and sometimes it takes many more visits (20 or more) before it freezes.
Every time I enter the activity there are two kinds of exceptions being thrown, the first one being:
Error loading tile
java.lang.IllegalStateException: Cannot perform this operation because the connection pool has been closed.
at android.database.sqlite.SQLiteConnectionPool.throwIfClosedLocked(SQLiteConnectionPool.java:962)
at android.database.sqlite.SQLiteConnectionPool.waitForConnection(SQLiteConnectionPool.java:677)
at android.database.sqlite.SQLiteConnectionPool.acquireConnection(SQLiteConnectionPool.java:348)
at android.database.sqlite.SQLiteSession.acquireConnection(SQLiteSession.java:894)
at android.database.sqlite.SQLiteSession.executeForCursorWindow(SQLiteSession.java:834)
at android.database.sqlite.SQLiteQuery.fillWindow(SQLiteQuery.java:62)
at android.database.sqlite.SQLiteCursor.fillWindow(SQLiteCursor.java:145)
at android.database.sqlite.SQLiteCursor.getCount(SQLiteCursor.java:134)
at org.osmdroid.tileprovider.modules.MapTileSqlCacheProvider$TileLoader.loadTile(MapTileSqlCacheProvider.java:209)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:297)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:760)
and the second one:
Unable to store cached tile from Mapnik /0/0/0 db is null
java.lang.NullPointerException: Attempt to invoke virtual method 'int android.database.sqlite.SQLiteDatabase.delete(java.lang.String, java.lang.String, java.lang.String[])' on a null object reference
at org.osmdroid.tileprovider.modules.SqlTileWriter.saveFile(SqlTileWriter.java:175)
at org.osmdroid.tileprovider.modules.MapTileDownloader$TileLoader.loadTile(MapTileDownloader.java:251)
at org.osmdroid.tileprovider.modules.MapTileModuleProviderBase$TileLoader.run(MapTileModuleProviderBase.java:297)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:760)
Has anyone else had similar issues, or does anyone know what to do?
That's a fun one. It looks like a lifecycle/concurrency problem. It's possible that old instances of the map fragment are still alive. You may be able to work around the issue by forcing Android to execute the pending fragment transactions now:
From osmdroid test/instrumentation package
fm.beginTransaction()
    .replace(org.osmdroid.R.id.samples_container, basefrag, ExtraSamplesActivity.SAMPLES_FRAGMENT_TAG)
    .addToBackStack(ExtraSamplesActivity.SAMPLES_FRAGMENT_TAG)
    .commit();
fm.executePendingTransactions();
Basically, it forces the fragment transaction to happen immediately which should force the android lifecycle stuff to happen.
The other possibility is that you have a static reference to the map or something that's holding on to the map reference or your offline tile provider instance. Are any of those possible?
Edit: I just tested a map in a fragment by loading and unloading the map a bunch of times and could not reproduce it. Are you creating the map programmatically or are you using an XML layout?

How to avoid closing the EntityManager when an OptimisticLockException occurs?

My problem: a process tries to change an entity that has already been changed and now has a newer version id. When I call flush() in my code, an OptimisticLockException is raised inside the UnitOfWork's commit() and caught in the same place by a catch-all block. And in that catch, Doctrine closes the EntityManager.
If I want to skip this entity and continue with the others from the ArrayCollection, should I not use flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And I still get
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed entity manager.
If I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is unregistered with the new entity manager, and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush, to avoid the exception. In other words, you should not call flush() if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity or not.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get the expected version (the easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // assert that you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // if $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the doc doesn't work if the db was changed by another server or php worker thread. The lock function only makes sure the version number wasn't changed by the current php script since the entity was loaded into memory. It doesn't read the db to make sure the version number is still the expected one.
And even if it did read the db, there is still the potential for a race condition between the time the lock function checks the current version in the db and the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help unless you can afford to lose performance and actually lock your db for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If EntityManager closes after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );
    // The ServiceManager should be aware of this change.
    // This is for the Zend ServiceManager; you should adapt this part to your use case.
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);
    // Then you should manually reload every Entity you need (or repeat the whole set of actions)
}

SFDC Apex Code: Access a class-level static variable from a "Future" method

I need to make a callout to a web service from my ApexController class. To do this, I have an async method with the attribute @future(callout=true). The web service call needs to reference an object that gets populated in the save call from a VF page.
Since static (future) calls do not allow objects to be passed in as method arguments, I was planning to add the data to a static Map and access that in my static method to do the web service callout. However, the static Map object is getting re-initialized and is null in the static method.
I will really appreciate it if anyone can give me some pointers on how to address this issue.
Thanks!
Here is the code snippet:
private static Map<String, WidgetModels.LeadInformation> leadsMap;
....
......
public PageReference save() {
    if (leadsMap == null) {
        leadsMap = new Map<String, WidgetModels.LeadInformation>();
    }
    leadsMap.put(guid, widgetLead);
}

// make async call to Widget Webservice
saveWidgetCallInformation(guid);

// async call to widget webservice
@future(callout=true)
public static void saveWidgetCallInformation(String guid) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) leadsMap.get(guid);
    .....
    // call webservice
}
@future is a totally separate execution context. It won't have access to any history of how it was called (meaning all static variables are reset, you start with fresh governor limits, etc.; it is like a new action initiated by the user).
The only thing it will "know" is the method parameters that were passed to it. And you can't pass whole objects; you need to pass primitives (Integer, String, DateTime etc.) or collections of primitives (List, Set, Map).
If you can access all the info you need from the database - just pass a List<Id> for example and query it.
If you can't - you can cheat by serializing your objects and passing them as List<String>. Check the documentation around JSON class or these 2 handy posts:
https://developer.salesforce.com/blogs/developer-relations/2013/06/passing-objects-to-future-annotated-methods.html
https://gist.github.com/kevinohara80/1790817
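A rough sketch of that serialize/deserialize approach, condensed from the question's code (treat it as illustrative, not a drop-in fix):

// in save(): serialize the object and hand the JSON string to the future method
String serializedLead = JSON.serialize(widgetLead);
saveWidgetCallInformation(serializedLead);

// the future method rebuilds the object from the string
@future(callout=true)
public static void saveWidgetCallInformation(String serializedLead) {
    WidgetModels.LeadInformation cachedLeadInfo =
        (WidgetModels.LeadInformation) JSON.deserialize(serializedLead, WidgetModels.LeadInformation.class);
    // ... make the web service callout using cachedLeadInfo ...
}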
Side note - can you rethink your flow? If the starting point is Visualforce you can skip the @future step. Do the callout first and then the DML (if needed). That way the usual "you have uncommitted work pending" error won't be triggered. This thing is there not only to annoy developers ;) It's there to make you rethink your design. You're asking the application to hold an open transaction & lock on the table(s) for up to 2 minutes. And you're giving yourself extra work - will you roll back your changes correctly when the insert went OK but the callout failed?
By reversing the order of operations (callout first, then the DML) you're making it simpler - there was no save attempt to DB so there's nothing to roll back if the save fails.

JPA - only the first commit failed, but all should have failed

Please, can somebody help me explain the following (to me) very strange JPA behaviour? I intentionally change the primary key of an entity, which is prohibited in JPA.
So the first commit correctly throws "Exception Description: The attribute [date] of class [some.package.Holiday] is mapped to a primary key column in the database. Updates are not allowed.".
But the second (third, fourth, ...) commit succeeds...! How is this possible?!
Holiday h1 = EM.find(Holiday.class, new GregorianCalendar(2011, 0, 3).getTime());
try {
    EM.getTransaction().begin();
    h1.setDate(new GregorianCalendar(2011, 0, 4).getTime());
    EM.getTransaction().commit();
    System.out.println("First commit succeed");
} catch (Exception e) {
    System.out.println("First commit failed");
}
try {
    EM.getTransaction().begin();
    EM.getTransaction().commit();
    System.out.println("Second commit succeed");
} catch (Exception e) {
    System.out.println("Second commit failed");
}
It will print out:
First commit failed
Second commit succeed
OMG, how is this possible?!
(Using EclipseLink 2.2.0.v20110202-r8913 with MySQL.)
The failure of the commit operation for the first transaction has no bearing on the second transaction. This is due to the fact that when the first commit fails, the EntityTransaction is no longer in the active state. When you issue the second em.getTransaction().begin invocation, a new transaction is initiated that does not have any knowledge of the first.
It is important to note that although your code may use the same EntityTransaction reference in both cases, it is not necessarily the case that this class actually represents the transaction. In the case of EclipseLink, the EntityTransaction reference actually wraps an EntityTransactionWrapper instance that in turn uses a RepeatableWriteUnitOfWork, the latter two classes being provided by the EclipseLink implementation and not JPA. It is the RepeatableWriteUnitOfWork instance that actually tracks the collection of changes made to entities that will be merged into the shared cache (and the database). When the first transaction fails, the underlying UnitOfWork is invalidated, and a new UnitOfWork is established when you start the second EntityTransaction.
The same will apply to most other JPA providers as the EntityTransaction class is not a concrete final class. Instead, it is an interface that is typically implemented by another class in the JPA provider, and which may likewise wrap a transaction thereby requiring clients to use the EntityTransaction reference instead of directly working with the underlying transaction (which may be a JTA transaction or a resource-local transaction).
Additionally, you ought to remember that:
EntityTransaction.begin() should be invoked only once. Invoking it a second time while a transaction is active will result in an IllegalStateException being thrown. So the fact that you are able to invoke it the second time implies that the first transaction is no longer active.
If you require the changes performed in the context of the first transaction to be made available to the second, you must merge the entities back into the shared context in the second transaction, after they've been detached by the first. While this may sound ridiculous, you ought to remember that detached entities can be modified by clients (read: end-users) before they are merged back, so the changes made by the end users may be retained, while mistakes (like the modification of the primary keys) may be corrected in the interim.
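For illustration, a sketch of that recovery in terms of the question's code (the non-key setter is hypothetical):

// h1 is detached after the failed commit; undo the illegal key change first
h1.setDate(new GregorianCalendar(2011, 0, 3).getTime()); // restore the original primary key
try {
    EM.getTransaction().begin();
    Holiday managed = EM.merge(h1); // re-attach the detached entity to the new persistence context
    managed.setDescription("corrected value"); // hypothetical non-key change to retain
    EM.getTransaction().commit();
} catch (Exception e) {
    if (EM.getTransaction().isActive()) {
        EM.getTransaction().rollback();
    }
}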