JPA - only the first commit failed, but all should fail - jpa-2.0

Can somebody please help me explain the following (to me) very strange JPA behaviour? I intentionally change the primary key of an entity, which is prohibited in JPA.
So the first commit correctly throws "Exception Description: The attribute [date] of class [some.package.Holiday] is mapped to a primary key column in the database. Updates are not allowed.".
But the second (third, fourth, ...) commit succeeds! How is this possible?
Holiday h1 = EM.find(Holiday.class, new GregorianCalendar(2011, 0, 3).getTime());
try {
    EM.getTransaction().begin();
    h1.setDate(new GregorianCalendar(2011, 0, 4).getTime());
    EM.getTransaction().commit();
    System.out.println("First commit succeeded");
} catch (Exception e) {
    System.out.println("First commit failed");
}
try {
    EM.getTransaction().begin();
    EM.getTransaction().commit();
    System.out.println("Second commit succeeded");
} catch (Exception e) {
    System.out.println("Second commit failed");
}
It prints:
First commit failed
Second commit succeeded
OMG, how is this possible?!
(Using EclipseLink 2.2.0.v20110202-r8913 with MySQL.)

The failure of the commit operation for the first transaction has no bearing on the second transaction. This is because when the first commit fails, the EntityTransaction is no longer in the active state. When you issue the second EM.getTransaction().begin() invocation, a new transaction is initiated that has no knowledge of the first.
It is important to note that although your code may use the same EntityTransaction reference in both cases, this class does not necessarily represent the transaction itself. In the case of EclipseLink, the EntityTransaction reference actually wraps an EntityTransactionWrapper instance that in turn uses a RepeatableWriteUnitOfWork, the latter two classes being provided by the EclipseLink implementation, not JPA. It is the RepeatableWriteUnitOfWork instance that actually tracks the collection of changes made to entities that will be merged into the shared cache (and the database). When the first transaction fails, the underlying UnitOfWork is invalidated, and a new UnitOfWork is established when you start the second EntityTransaction.
The same will apply to most other JPA providers as the EntityTransaction class is not a concrete final class. Instead, it is an interface that is typically implemented by another class in the JPA provider, and which may likewise wrap a transaction thereby requiring clients to use the EntityTransaction reference instead of directly working with the underlying transaction (which may be a JTA transaction or a resource-local transaction).
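You can observe this with nothing but the standard JPA API; a minimal sketch (reusing EM and h1 from the question):
try {
    EM.getTransaction().begin();
    h1.setDate(new GregorianCalendar(2011, 0, 4).getTime()); // illegal PK update
    EM.getTransaction().commit();                            // fails
} catch (Exception e) {
    // the failed commit rolled back and deactivated the transaction
    System.out.println(EM.getTransaction().isActive());      // prints "false"
}
EM.getTransaction().begin(); // legal again - a brand-new transaction begins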
Additionally, you ought to remember that:
EntityTransaction.begin() should be invoked only once. Invoking it a second time while a transaction is active will result in an IllegalStateException being thrown, as it cannot be invoked when a transaction is in progress. So the fact that you are able to invoke it a second time implies that the first transaction is no longer active.
If you require the changes performed in the context of the first transaction to be made available to the second, you must merge the entities back into the shared context in the second transaction, after they've been detached by the first (see the sketch below). While this may sound ridiculous, you ought to remember that detached entities can be modified by clients (read: end users) before they are merged back, so the changes made by the end users may be retained, while mistakes (like the modification of primary keys) may be corrected in the interim.
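A minimal sketch of that merge pattern in the second transaction (again reusing the question's Holiday entity and EM; the corrective setDate call is just one way to repair the mistake before reattaching):
// h1 was detached when the first commit failed
h1.setDate(new GregorianCalendar(2011, 0, 3).getTime()); // undo the illegal PK change
EM.getTransaction().begin();
Holiday managed = EM.merge(h1); // copies h1's state into a managed instance
EM.getTransaction().commit();   // commits any remaining (legal) changes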

Related

Google Cloud Datastore - get after insert in one request

I am trying to retrieve an entity immediately after it was saved. When debugging, I insert the entity, check the entities in the Google Cloud console, and see that it was created.
Key key = datastore.put(fullEntity);
After that, I continue with getting the entity with datastore.get(key), but nothing is returned. How do I retrieve the saved entity within one request?
I've read this question: Missing entities after insertion in Google Cloud DataStore.
But I am only saving one entity, not tens of thousands like in that question.
I am using Java 11 and Google Datastore (the com.google.cloud.datastore package).
Edit: added the code showing how the entity was created.
public Key create.... {
    // creating the entity inside a method
    Transaction txn = this.datastore.newTransaction();
    this.datastore = DatastoreOptions.getDefaultInstance().getService();
    Builder<IncompleteKey> builder = newBuilder(entitykey);
    setLongOrNull(builder, "price", purchase.getPrice());
    setTimestampOrNull(builder, "validFrom", of(purchase.getValidFrom()));
    setStringOrNull(builder, "invoiceNumber", purchase.getInvoiceNumber());
    setBooleanOrNull(builder, "paidByCard", purchase.getPaidByCard());
    newPurchase = entityToObject(this.datastore.put(builder.build()));
    if (newPurchase != null && purchase.getItems() != null && purchase.getItems().size() > 0) {
        for (Item item : purchase.getItems()) {
            newPurchase.getItems().add(this.itemDao.save(item, newPurchase));
        }
    }
    txn.commit();
    return newPurchase.getKey();
}
After that, I am trying to retrieve the created entity:
Key key = create(...);
Entity e = datastore.get(key);
I believe there are a few issues with your code, but since we are unable to see the logic behind many of your methods, here is my guess.
First of all, as you can see in the documentation, it is possible to save and retrieve an entity in the same piece of code, so that in itself is not the problem.
It seems like you are using a transaction, which is the right tool for performing multiple operations as a single action, but you don't appear to be using it properly: you only instantiate it and commit it, without putting any operations in it. Furthermore, you are using this.datastore to save to the database, which completely bypasses the transaction.
So either save the object once all of its items have already been added, or use the transaction to save all the entities at once.
And I believe you should use the entityKey in order to fetch the added purchase afterwards; don't mix the two approaches.
Also, you are creating the Transaction object from this.datastore before instantiating the latter, but I assume this is a copy-paste error.
Since you're creating a transaction for this operation, the entity put should happen inside the transaction:
txn.put(builder.build());
Also, the operations inside the loop where you add the purchase.getItems() to the newPurchase object should be done in the context of the same transaction; see the sketch below.
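A rough sketch of how the corrected method could look (hedged: it reuses the question's own helpers such as setLongOrNull, whose signatures I am guessing at, and assumes entityKey is a complete Key):
import com.google.cloud.datastore.*;

public Key create(Purchase purchase, Key entityKey) {
    // Initialize the client first, then open the transaction from it
    this.datastore = DatastoreOptions.getDefaultInstance().getService();
    Transaction txn = this.datastore.newTransaction();
    try {
        Entity.Builder builder = Entity.newBuilder(entityKey);
        setLongOrNull(builder, "price", purchase.getPrice());
        // ... remaining properties as in the question ...
        Entity saved = txn.put(builder.build()); // write through the transaction, not this.datastore
        // the item writes inside itemDao.save(...) would also have to go through txn
        txn.commit(); // the entity only becomes readable once this succeeds
        return saved.getKey();
    } finally {
        if (txn.isActive()) {
            txn.rollback(); // clean up if commit was never reached
        }
    }
}
After the commit, datastore.get(key) should return the entity.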
Let me know if this resolves the issue.
Cheers!

Set autocommit off in PostgreSQL with SOCI C++

This is an answer rather than a question, which I wanted to post on SO anyway. I struggled with this question ("how to turn off autocommit when using the SOCI library with PostgreSQL databases") for a long time and came up with several solutions.
In Oracle, the autocommit option is turned off by default and we have to call soci::session::commit explicitly to commit the transactions we have made, but in PostgreSQL it is the other way around and it will commit as soon as we execute an SQL statement (correct me if I'm wrong). This introduces problems when we write database-independent applications. The SOCI library provides soci::transaction to address this.
So, when we initialize a soci::transaction by passing it the soci::session, it will hold the changes we make without committing them to the database. When we call soci::transaction::commit at the end, it commits the changes to the database.
soci::session sql(CONNECTION_STRING);
soci::transaction tr(sql);
try {
    sql << "insert into soci_test(id, name) values(7, 'John')";
    tr.commit();
}
catch (std::exception& e) {
    tr.rollback();
}
But performing a commit or rollback ends the transaction tr, and we need to initialize another soci::transaction in order to hold the changes we are about to make next (i.e., to keep an active transaction in progress). Here are some more fun facts about soci::transaction:
You can have only one soci::transaction instance per soci::session; if you initialize a second one, it replaces the first.
You cannot perform more than a single commit or rollback through one soci::transaction; the second commit or rollback will throw an exception.
You can initialize a transaction and then use session::commit or session::rollback. This gives the same result as transaction::commit or transaction::rollback, but as usual the transaction ends as soon as you perform a single commit or rollback.
The soci::transaction object does not need to be visible in the scope where you execute the SQL and call commit or rollback in order to hold the database changes you made until you explicitly commit or roll back. In other words, as long as there is an active transaction in progress for a session, database changes are held back.
But once the lifetime of the transaction instance created for the session has ended, you cannot expect the database changes to be held back any longer.
If you ever run into "WARNING: there is no transaction in progress", make sure you perform commits or rollbacks only through soci::transaction::commit or soci::transaction::rollback.
Here is the solution I came up with to enable explicit commit or rollback with any database backend.
#include <memory>
#include <soci/soci.h> // or <soci.h>, depending on your SOCI version

namespace mysociutils
{

class session : public soci::session
{
public:
    void open(std::string const & connectString)
    {
        soci::session::open(connectString);
        tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
    }

    // Commit the pending changes and immediately begin a new transaction
    void commit()
    {
        tr->commit();
        tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
    }

    // Roll back the pending changes and immediately begin a new transaction
    void rollback()
    {
        tr->rollback();
        tr = std::unique_ptr<soci::transaction>(new soci::transaction(*this));
    }

    ~session()
    {
        if (tr)
            tr->rollback(); // discard anything left uncommitted
    }

private:
    std::unique_ptr<soci::transaction> tr;
};

} // namespace mysociutils
Whenever a commit or rollback is performed, a new soci::transaction is initialized. Now you can replace your soci::session sql with mysociutils::session sql and enjoy SET AUTOCOMMIT OFF behaviour.

How to avoid closing the EntityManager when an OptimisticLockException occurs?

My problem: a process tries to change an entity that has already been changed elsewhere and therefore has a newer version id. When flush() is called in my code, an OptimisticLockException is raised inside the UnitOfWork's commit() and caught in the same place by a catch-all block. And in this catch block Doctrine closes the EntityManager.
If I want to skip this entity and continue with the others from the ArrayCollection, should I avoid using flush()?
I tried recreating the EntityManager:
} catch (OptimisticLockException $e) {
    $this->em = $this->container->get('doctrine')->getManager();
    echo "\n||OptimisticLockException.";
    continue;
}
And I still get:
[Doctrine\ORM\ORMException]
The EntityManager is closed.
Strange.
If I do
$this->em->lock($entity, LockMode::OPTIMISTIC, $entity->getVersion());
and then call flush(), I get an OptimisticLockException and a closed EntityManager.
If I do
$this->getContainer()->get('doctrine')->resetManager();
$em = $doctrine->getManager();
the old data is not registered with this new entity manager and I can't even write logs to the database; I get this error:
[Symfony\Component\Debug\Exception\ContextErrorException]
Notice: Undefined index: 00000000514cef3c000000002ff4781e
You should check the entity version before you try to flush, to avoid the exception. In other words, you should not call flush() if the lock fails.
You can use the EntityManager#lock() method to check whether you can flush the entity or not.
/** @var EntityManager $em */
$entity = $em->getRepository('Post')->find($_REQUEST['id']);

// Get the expected version (easiest way is to have the version number as a hidden form field)
$expectedVersion = $_REQUEST['version'];

// Update your entity
$entity->setText($_REQUEST['text']);

try {
    // Assert that you are editing the right version
    $em->lock($entity, LockMode::OPTIMISTIC, $expectedVersion);
    // If $em->lock() fails, flush() is not called and the EntityManager is not closed
    $em->flush();
} catch (OptimisticLockException $e) {
    echo "Sorry, but someone else has already changed this entity. Please apply the changes again!";
}
Check the example in the Doctrine docs on optimistic locking.
Unfortunately, nearly 4 years later, Doctrine is still unable to recover from an optimistic lock properly.
Using the lock function as suggested in the docs doesn't work if the DB was changed by another server or another PHP worker thread. The lock function only makes sure the version number wasn't changed by the current PHP script since the entity was loaded into memory; it doesn't read the DB to make sure the version number is still the expected one.
And even if it did read the DB, there would still be the potential for a race condition between the moment the lock function checks the current version in the DB and the moment the flush is performed.
Consider this scenario:
server A reads the entity,
server B reads the same entity,
server B updates the db,
server A updates the db <== optimistic lock exception
The exception is triggered when flush is called and there is nothing that can be done to prevent it.
Even a pessimistic lock won't help unless you can afford to lose performance and actually lock your DB for a (relatively) long time.
Doctrine's solution (update... where version = :expected_version) is good in theory. But, sadly, Doctrine was designed to become unusable once an exception is triggered. Any exception. Every entity is detached. Even if the optimistic lock can be easily solved by re-reading the entity and applying the change again, Doctrine makes it very hard to do so.
As others have said, sometimes EntityManager#lock() is not useful. In my case, the Entity version may change during the same request.
If EntityManager closes after flush(), I proceed like this:
if (!$entityManager->isOpen()) {
    $entityManager = EntityManager::create(
        $entityManager->getConnection(),
        $entityManager->getConfiguration(),
        $entityManager->getEventManager()
    );
    // The ServiceManager should be aware of this change;
    // this is for Zend ServiceManager - adapt this part to your use case
    $serviceManager = $application->getServiceManager();
    $serviceManager->setAllowOverride(true);
    $serviceManager->setService(EntityManager::class, $entityManager);
    $serviceManager->setAllowOverride(false);
    // Then you should manually reload every entity you need (or repeat the whole set of actions)
}

JPA with JTA: how to persist many entities in one transaction

I have a list of objects. They are JPA "Location" entities.
List<Location> locations;
I have a stateless EJB which loops thru the list and persists each one.
public void createLocations() {
    List<Location> locations = getListOfJPAManagedLocationEntities(); // details omitted; not relevant to the issue
    for (Location location : locations) {
        em.persist(location);
    }
}
The code works fine. I do not have any problems.
However, the issue is: I want this to be an all-or-none transaction. Currently, each time through the for loop, the persist() method inserts a new row into the database. Suppose I have 100 location objects and the 54th object has something wrong with it and an exception is thrown. There will be 53 records inserted into the database. What I want is: either they all succeed or none of them do.
I'm using the latest & greatest version of Java EE6, EJB 3.x., and JPA 2. My persistence.xml uses JTA.
<persistence-unit name="myPersistenceUnit" transaction-type="JTA">
And I like having JTA.
I do not want to stop using JTA.
90% of the time JTA does exactly what I want it to do. But in this case, it doesn't seem to.
My understanding of JTA must be inaccurate because I always thought the beginning and end of the EJB method marked the boundaries of the JTA transaction (assume only one method is in-play as I've shown above). By my logic, the transaction would not end until the for-loop is done and the method returns, and then at that point the records are persisted.
I'm using the jTDS driver for SQL Server 2008. Perhaps the database doesn't want to insert a record without immediately committing it. The entity id is defined like this:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
I've checked the spec, and it is not proper to call the various "UserTransaction" or "getTransaction()" methods in a JTA environment.
So what can I do?
Thanks.
If you use JTA and container-managed transactions, the default behavior for a session EJB method call is to run in a transaction (it is as if the method were annotated with @TransactionAttribute(TransactionAttributeType.REQUIRED)). That means your code already runs in a transaction and will do what you expect: if an exception occurs at row 54, all previously inserted rows will be rolled back. You can go ahead and test this by throwing an exception yourself at some point in the loop. Note that if you throw a checked exception declared by your method, you can specify what the container should do when that exception occurs: annotate the exception class with @ApplicationException(rollback = true).
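A minimal sketch of that setup (the exception class, bean name, and validation rule are made up for illustration; each class would live in its own file):
import javax.ejb.ApplicationException;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import java.util.List;

// Checked exception that still forces a rollback of the JTA transaction
@ApplicationException(rollback = true)
public class InvalidLocationException extends Exception {
    public InvalidLocationException(String message) {
        super(message);
    }
}

@Stateless
public class LocationBean {

    @PersistenceContext
    private EntityManager em;

    // Container-managed transaction (REQUIRED by default): if the exception
    // is thrown at the 54th Location, the 53 earlier persists roll back too.
    public void createLocations(List<Location> locations) throws InvalidLocationException {
        for (Location location : locations) {
            if (location.getName() == null) { // hypothetical validation rule
                throw new InvalidLocationException("Location without a name");
            }
            em.persist(location);
        }
    }
}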
If there is a duplicate entry while looping, the loop itself will continue without problems; only when execution reaches em.flush() after the loop will an exception be thrown and the transaction rolled back.
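In code form, a sketch of that variant (note that with GenerationType.IDENTITY, as used in the question, the provider may execute each INSERT eagerly at persist() time rather than waiting for the flush):
for (Location location : locations) {
    em.persist(location); // queued in the persistence context
}
em.flush(); // SQL hits the database here; a failure rolls back the whole JTA transaction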
I'm using JBoss. Set your datasource in your standalone.xml or domain.xml to have
<datasource jta="true" ...>
Seems obvious, but I obviously set it wrong a long time ago and forgot about it.

NHibernate Load vs. Get behavior for testing

In simple tests I can assert whether an object has been persisted by checking whether its Id is no longer at its default value. But if I delete an object and want to check that the object, and perhaps its children, are really no longer in the database, the object Ids will still be at their saved values.
So I need to go to the DB, and I would like a helper assertion to make the tests more readable, which is where the question comes in. I like the idea of using Load to save the DB call, but I'm wondering whether the ensuing exceptions can corrupt the session.
Below are how the two assertions would look, I think. Which would you use?
Cheers,
Berryl
Get
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    var found = session.Get<T>(instance.Id);
    if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
}
Load
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    try
    {
        var found = session.Load<T>(instance.Id);
        if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
    catch (GenericADOException)
    {
        // nothing
    }
    catch (ObjectNotFoundException)
    {
        // nothing
    }
}
Edit
In either case I would be doing the fetch (Get or Load) in a new session, free of state from the session that did the save or delete.
I am trying to test cascade behavior, NOT to test NHibernate's ability to delete things, but maybe I am overthinking this one, or there is a simpler way I haven't thought of.
Your code in the 'Load' section will always hit Assert.Fail but never throw an exception, as Load<T> returns a proxy (with the Id property set, or populated from the 1st-level cache) without hitting the DB - i.e. ISession.Load will only fail if you access a property other than your Id property on a deleted entity.
As for your 'Get' section - I might be mistaken, but I think that if you delete an entity in a session and later try to use .Get in the same session, you will get the one in the 1st-level cache - and again not null.
See this post for the full explanation about .Load and .Get.
If you really need to see whether it is in your DB, use an IStatelessSession, or launch a child ISession (which will have an empty 1st-level cache).
EDIT: I thought of a bigger problem - your entity will only actually be deleted when the transaction is committed (the session is flushed by default at that point), so unless you manually flush your session (not recommended), it will still be in your DB.
Hope this helps.