jpa FlushModeType COMMIT - jpa-2.0

In FlushModeType.AUTO mode, the persistence context is synchronized with the database at the following times:
before each SELECT operation
at the end of a transaction
after a flush or close operation on the persistence context
FlushModeType.COMMIT means that the provider does not have to flush the persistence context before executing a query, because you have indicated that there is no changed data in memory that would affect the results of the database query.
I made an example in JBoss AS 6.0:
@Stateless
public class SessionBeanTwoA implements SessionBeanTwoALocal {

    @PersistenceContext(unitName = "entity_manager_trans_unit")
    protected EntityManager em;

    @EJB
    private SessionBeanTwoBLocal repo;

    @Override
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void findPersonByEmail(String email) {
        /* 1 */ List<Person> persons = repo.retrievePersonByEmail(email);
        /* 2 */ Person person = persons.get(0);
        /* 3 */ System.out.println(person.getAge());
        /* 4 */ person.setAge(2);
        /* 5 */ persons = repo.retrievePersonByEmail(email);
        /* 6 */ person = persons.get(0);
        /* 7 */ System.out.println(person.getAge());
    }
}
@Stateless
public class SessionBeanTwoB extends GenericCrud implements SessionBeanTwoBLocal {

    @Override
    public List<Person> retrievePersonByEmail(String email) {
        Query query = em.createNamedQuery("Person.findAllPersonByEmail");
        query.setFlushMode(FlushModeType.COMMIT);
        query.setParameter("email", email);
        List<Person> persons = query.getResultList();
        return persons;
    }
}
FlushModeType.COMMIT does not seem to work. At line 1, the person's age is read from the database, and 35 is printed at line 3. At line 4, the person is updated within the persistence context, but at line 7 the person's age is printed as 2.
The JPA 2.0 spec says:
If FlushModeType.COMMIT is set, the effect of updates made to entities in the persistence context upon queries is unspecified.
But many books explain it the way I wrote at the beginning of this post.
So what does FlushModeType.COMMIT really do?
Thanks in advance for your help.

The javadoc says this about FlushModeType.COMMIT:
Flushing to occur at transaction commit. The provider may flush at other times, but is not required to.
So even though the query is configured to flush on commit, the provider may still flush earlier if it decides it should. With the AUTO setting, the provider typically flushes at various points, which requires an expensive traversal of all managed entities (especially if there are many of them) to check whether any database updates or deletes need to be scheduled. So if we are sure that no pending changes would affect the query results, we can use the COMMIT setting to cut down on those frequent dirty checks and save some CPU cycles.
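If it helps, here is a minimal sketch of how the setting is typically applied (assuming an injected EntityManager named em and the named query from the question; this is illustrative, not the asker's exact code):

// Option 1: set COMMIT for the whole persistence context; queries created from
// this EntityManager will not force a flush before executing (unless overridden per query).
em.setFlushMode(FlushModeType.COMMIT);

// Option 2: set COMMIT for a single query only, as the question's repository does.
List<Person> persons = em.createNamedQuery("Person.findAllPersonByEmail", Person.class)
        .setFlushMode(FlushModeType.COMMIT)
        .setParameter("email", email)
        .getResultList();

In both cases, unflushed in-memory changes (such as person.setAge(2) in the question) may or may not be visible in the query results; per the spec that behavior is unspecified, which matches what you observed.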

Dynamically adding participants to state in Corda 4.0

Can participants be dynamically added to a state inside a flow, so that the state is stored in the third party's vault without using StatesToRecord.ALL_VISIBLE in ReceiveFinalityFlow?
We did the same thing in Corda 2.0, but it does not work in Corda 4.0.
Is it no longer supported from Corda 3.2 onwards? I see that @KeepForDJVM has been added to ContractState.
I tried to add a participant dynamically to the IOUState with [iouState.participants.add(thirdParty)], after changing participants in IOUState to a mutable list as [override val participants: MutableList<AbstractParty> = mutableListOf(lender, borrower)], so that the IOUState would be stored in the third party's vault as well. I am passing flow sessions for both the borrower and the third party to CollectSignaturesFlow and FinalityFlow. The IOUFlowTests test [flow records the correct IOU in both parties' vaults] fails because the iouState is not found in the third party's vault.
IOUState:
@BelongsToContract(IOUContract::class)
data class IOUState(val value: Int,
                    val lender: Party,
                    val borrower: Party,
                    val thirdParty: Party,
                    override val linearId: UniqueIdentifier = UniqueIdentifier()):
        LinearState, QueryableState {
    /** The public keys of the involved parties. */
    //override val participants: MutableList<AbstractParty> get() = mutableListOf(lender, borrower)
    override val participants = mutableListOf(lender, borrower)
ExampleFlow:
var iouState = IOUState(iouValue, serviceHub.myInfo.legalIdentities.first(), otherParty, thirdParty)
iouState.participants.add(thirdParty)
val txCommand = Command(IOUContract.Commands.Create(), iouState.participants.map { it.owningKey })
val counterparties = iouState.participants.map { it as Party }.filter { it.owningKey != ourIdentity.owningKey }.toSet()
counterparties.forEach { p -> flowSessions.add(initiateFlow(p))}
val fullySignedTx = subFlow(CollectSignaturesFlow(partSignedTx, flowSessions, GATHERING_SIGS.childProgressTracker()))
// Stage 5.
progressTracker.currentStep = FINALISING_TRANSACTION
// Notarise and record the transaction in both parties' vaults.
return subFlow(FinalityFlow(fullySignedTx, flowSessions, FINALISING_TRANSACTION.childProgressTracker()))
Both counterparties, the borrower and the third party, receive the flow and sign the transaction, but the third party does not appear in the participants list and the state is not stored in the third party's vault.
I expect the third party to be in the participants list and the IOUState to be stored in the third party's vault as well.
In Corda, states are immutable. This means that you cannot dynamically add participants to a given state in the body of a flow. There are other solutions, however, to informing a new third party of the state!
There are two ways to accomplish your goals here:
Create a new IOUState tx output with an updated participant list.
In the body of the flow, you should create a new IOUState with an updated list of participants. You will have to update the IOUState so that participants is a value in the primary constructor. Then you might use a helper method like this to add a participant:
fun addParticipant(partyToAdd: Party): IOUState = copy(participants = participants + partyToAdd)
Here's the important part: you must then include the old IOUState as an input to this transaction and the new IOUState as an output. Corda is based on the UTXO model - the only way to update a state is to mark it as history (use it as an input) and then persist an updated version to the ledger.
Note: as a participant, the informed party will now be able to propose changes to this IOUState - these must be accounted for in the Corda Contract.
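As a rough sketch of the first approach (names taken from the question; the QueryableState members and the thirdParty constructor field are omitted for brevity, and oldStateAndRef, notary and thirdParty are assumed to be in scope in the flow):

import net.corda.core.contracts.*
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party
import net.corda.core.transactions.TransactionBuilder

@BelongsToContract(IOUContract::class)
data class IOUState(val value: Int,
                    val lender: Party,
                    val borrower: Party,
                    // participants now lives in the primary constructor, so copy() can replace it
                    override val participants: List<AbstractParty> = listOf(lender, borrower),
                    override val linearId: UniqueIdentifier = UniqueIdentifier()) : LinearState {

    // Returns a brand-new state; the original state stays immutable
    fun addParticipant(partyToAdd: Party): IOUState = copy(participants = participants + partyToAdd)
}

// In the flow: consume the old state as an input and issue the updated state as an output
val updatedState = oldStateAndRef.state.data.addParticipant(thirdParty)
val txBuilder = TransactionBuilder(notary)
        .addInputState(oldStateAndRef)
        .addOutputState(updatedState)
        .addCommand(IOUContract.Commands.Create(), updatedState.participants.map { it.owningKey })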
Use the SendStateAndRefFlow (Likely the better solution for your issue)
The SendStateAndRefFlow will (as specified in its name) send a state and its associated stateRef to the receiving node. The counterparty (receiving node) must use ReceiveStateAndRefFlow at the correct point in the flow conversation.
subFlow(SendStateAndRefFlow(counterpartySession, dummyStates))
Both of these methods will cause the receiving node to validate the dependencies of the state (all of the inputs and transactions that comprise its history).
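For completeness, a hedged sketch of the receiving side (otherSideSession is an illustrative name for the counterparty's FlowSession; the call has to line up with the sender's SendStateAndRefFlow):

import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.ReceiveStateAndRefFlow

// Runs in the counterparty's responder flow at the matching point of the conversation.
val received: List<StateAndRef<IOUState>> = subFlow(ReceiveStateAndRefFlow<IOUState>(otherSideSession))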

Plugin re-using Target parameter between calls

I've created and deployed a plugin for the Update event of a custom entity, but it seems that when multiple users update different entities in quick succession, the plugin uses the first entity it receives for each call.
To investigate further I added NLog via NuGet, and at the beginning of the Execute function I generate a Guid and log the entity Id and the Guid. When I look in the log I can see the same Id and Guid logged 3-4 times before both change.
What I think is happening is that the code is being run for each user but using the first entity's details, so the update is applied only to the first entity.
Why is this happening and how can I stop it? The problem is that users are saying the plugin is erratic.
Here is my code:
public class OnUpdateClaimSection : IPlugin
{
    private static Logger logger = LogManager.GetCurrentClassLogger();
    private string logId = Guid.NewGuid().ToString();

    public void Execute(IServiceProvider serviceProvider)
    {
        try
        {
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
            {
                logger.Debug("{0} {1}|{2}|{3}", logId, context.MessageName, context.PrimaryEntityName, Common.GetSystemUserFullName(service, context.UserId));
                var entity = context.InputParameters["Target"] as Entity;
                logger.Debug("{0} {1}", logId, entity.Id);

                var claimSection = GetClaimSection(service, entity.ToEntity<ClaimSection>());
                CalculateClaimTotals(service, claimSection);
            }
        }
        catch (Exception ex)
        {
            logger.Error("{0} Exception : {1}", logId, ex.Message);
            throw;
        }
    }
}
Plugin classes are instantiated once by the CRM platform and are then reused across requests. Therefore you must be very careful with class field variables, because they are not thread-safe.
In your example the field logId is initialized once per plugin instance rather than once per execution, and that instance is shared between concurrent requests. Race conditions between multiple threads reusing this shared state cause the effects you describe.
I suggest using plugin class fields only when you are sure their implementation is absolutely thread-safe; keep per-request state in local variables inside Execute.
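A minimal sketch of that adjustment (the class from the question, trimmed for brevity), with the per-request Guid moved into Execute so each execution gets its own value:

public class OnUpdateClaimSection : IPlugin
{
    // A static, write-once logger is safe to share across threads.
    private static readonly Logger logger = LogManager.GetCurrentClassLogger();

    public void Execute(IServiceProvider serviceProvider)
    {
        // Local variable: every execution (thread) gets its own correlation id.
        string logId = Guid.NewGuid().ToString();

        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
        {
            var entity = context.InputParameters["Target"] as Entity;
            logger.Debug("{0} {1}", logId, entity.Id);
            // ... rest of the original Execute logic, using only local variables ...
        }
    }
}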

What's the lazy strategy and how does it work?

I have a problem. I'm learning JPA and using an embedded OpenEJB container in unit tests, but the only thing that works is @OneToMany(fetch=EAGER); otherwise the collection is always null. I haven't found out how the lazy strategy works, how the container fills in the data, and under which circumstances the container triggers the loading.
I have read that loading is triggered when the getter is called. But when I have the code:
@OneToMany(fetch = LAZY, mappedBy = "someField")
private Set<AnotherEntities> entities = new HashSet<AnotherEntities>();
...
public Set<AnotherEntities> getEntities() {
    return entities;
}
I'm always getting null. I think the LAZY strategy cannot be tested with an embedded container. The problem might also be in the bidirectional relation.
Does anybody else have similar experiences with JPA testing?
Attachments
The real test case with setup:
@RunWith(UnitilsJUnit4TestClassRunner.class)
@DataSet("dataSource.xml")
public class UnitilsCheck extends UnitilsJUnit4 {

    private Persister prs;

    public UnitilsCheck() {
        Throwable err = null;
        try {
            Class.forName("org.hsqldb.jdbcDriver").newInstance();
            Properties props = new Properties();
            props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.openejb.client.LocalInitialContextFactory");
            props.put("ds", "new://Resource?type=DataSource");
            props.put("ds.JdbcDriver", "org.hsqldb.jdbcDriver");
            props.put("ds.JdbcUrl", "jdbc:hsqldb:mem:PhoneBookDB");
            props.put("ds.UserName", "sa");
            props.put("ds.Password", "");
            props.put("ds.JtaManaged", "true");
            Context context = new InitialContext(props);
            prs = (Persister) context.lookup("PersisterImplRemote");
        }
        catch (Throwable e) {
            e.printStackTrace();
            err = e;
        }
        TestCase.assertNull(err);
    }

    @Test
    public void obtainNickNamesLazily() {
        TestCase.assertNotNull(prs);
        PersistableObject po = prs.findByPrimaryKey("Ferenc");
        TestCase.assertNotNull(po);
        Collection<NickNames> nicks = po.getNickNames();
        TestCase.assertNotNull(nicks);
        TestCase.assertEquals("[Nick name: Kutyafája, belongs to Ferenc]", nicks.toString());
    }
}
The bean Persister is the bean mediating access to the entity beans. The crucial code of the class follows:
@PersistenceUnit(unitName = "PhonePU")
protected EntityManagerFactory emf;

public PhoneBook findByPrimaryKey(String name) {
    EntityManager em = emf.createEntityManager();
    PhoneBook phonebook = (PhoneBook) em.find(PhoneBook.class, name);
    em.close();
    return phonebook;
}
The entity PhoneBook is one line of the phone book (i.e. a person). One person can have zero or more nick names. With the EAGER strategy it works; with LAZY the collection is always null. Maybe the problem is in the detaching of objects. (See OpenEJB - JPA Concepts, part Caches and detaching.) But the manual says the collection can sometimes (more likely many times) be empty, not null.
The problem is in the life cycle of an entity. (Geronimo uses OpenJPA, so see the OpenJPA tutorial, part Entity Lifecycle Management.) The application uses container-managed transactions. Each method call on the bean Persister runs in its own transaction, and the persistence context depends on the transaction. The entity is disconnected from its context at the end of the transaction, thus at the end of the method. I tried to get the entity and, on the next line of the same method, to get the collection of nick names, and it worked. So the problem was identified: I cannot lazily load any further entity data from the data store without re-attaching the entity to some persistence context. The entity is re-attached by the EntityManager.merge() method.
The code needs a further correction. Because the entity cannot obtain an EntityManager reference and re-attach itself, the method returning the nick names has to be moved to the Persister class. (The "Heureka!" comment marks the critical line that re-attaches the entity.)
public Collection<NickNames> getNickNamesFor(PhoneBook pb) {
    // emf is an EntityManagerFactory reference
    EntityManager em = emf.createEntityManager();
    PhoneBook attached = em.merge(pb); // Heureka!
    Collection<NickNames> nicks = attached.getNickNames();
    em.close();
    return nicks;
}
The collection is then obtained this way:
// I have a PhoneBook instance pb
// pb.getNickNames() returns only null
// I have a Persister instance pe
Collection<NickNames> nicks = pe.getNickNamesFor(pb);
That's all.
You can have a look at my second question on this topic that I asked on this forum: OpenJPA - lazy fetching does not work.
How I would write the code:
@Entity
public class MyEntity {

    @OneToMany(fetch = LAZY, mappedBy = "someField")
    private Set<AnotherEntities> entities;

    // Constructor for JPA.
    // Fields aren't initialized here so that each em.load
    // won't create unnecessary objects.
    private MyEntity() {}

    // Factory method for the rest.
    // Do field initialization with default values here.
    public static MyEntity create() {
        MyEntity e = new MyEntity();
        e.entities = new HashSet<AnotherEntities>();
        return e;
    }

    public Set<AnotherEntities> getEntities() {
        return entities;
    }
}
Idea no 2:
I just thought that the order of operations in EAGER and LAZY fetching may differ, i.e. EAGER fetching may
Declare the field entities
Fetch the value for entities (I'd assume null)
Set the value of entities to new HashSet<T>()
while LAZY may
Declare the field entities
Set the value of entities to new HashSet<T>()
Fetch the value for entities (I'd assume null)
I have to find a citation for this as well.
Idea no 1: (Not the right answer)
What if you'd annotate the getter instead of the field? This should instruct JPA to use getters and setters instead of field access.
In the Java Persistence API, an entity can have field-based or
property-based access. In field-based access, the persistence provider
accesses the state of the entity directly through its instance
variables. In property-based access, the persistence provider uses
JavaBeans-style get/set accessor methods to access the entity's
persistent properties.
From The Java Persistence API - A Simpler Programming Model for Entity Persistence
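A minimal sketch of that idea, reusing the names from the snippet above (whether it actually changes the lazy-loading behavior is exactly what this idea speculates about):

@Entity
public class MyEntity {

    private Set<AnotherEntities> entities = new HashSet<AnotherEntities>();

    // Annotating the getter instead of the field switches the entity to
    // property-based access: the provider reads and writes persistent state
    // through this getter/setter pair rather than the field.
    @OneToMany(fetch = FetchType.LAZY, mappedBy = "someField")
    public Set<AnotherEntities> getEntities() {
        return entities;
    }

    public void setEntities(Set<AnotherEntities> entities) {
        this.entities = entities;
    }
}

Note that for a consistent access type the @Id annotation would also have to sit on its getter.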

EF Code First issue on CommitTransaction - using Repository pattern

I am having an issue with EF 4.1 using "Code First". Let me set up my situation before I start posting any code. I have my DbContext class, called MemberSalesContext, in a class library project called Data.EF. I have my POCOs in a separate class library project called Domain. My Domain project knows nothing of Entity Framework: no references, no nothing. My Data.EF project has a reference to the Domain project so that my DB context class can wire everything up in my mapping classes located in Data.EF.Mapping. I am doing all of the mappings in this namespace using the EntityTypeConfiguration class from Entity Framework. All of this is pretty standard stuff. On top of Entity Framework, I am using the Repository pattern and the Specification pattern.
My SQL Server database table has a composite primary key defined. The three columns that are part of the key are Batch_ID, RecDate, and Supplier_Date. The table has an identity column (database-generated value => +1) called XREF_ID, which is not part of the PK.
My mapping class, located in Data.EF.Mapping looks like the following:
public class CrossReferenceMapping : EntityTypeConfiguration<CrossReference>
{
    public CrossReferenceMapping()
    {
        HasKey(cpk => cpk.Batch_ID);
        HasKey(cpk => cpk.RecDate);
        HasKey(cpk => cpk.Supplier_Date);
        Property(p => p.XREF_ID).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        ToTable("wPRSBatchXREF");
    }
}
My MemberSalesContext class (which inherits from DbContext) looks like the following:
public class MemberSalesContext : DbContext, IDbContext
{
    //...more DbSets here...
    public DbSet<CrossReference> CrossReferences { get; set; }
    //...more DbSets here...

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
        //...more modelBuilder here...
        modelBuilder.Configurations.Add<CrossReference>(new CrossReferenceMapping());
        //...more modelBuilder here...
    }
}
I have a private method in a class that uses my repository to return a list of objects that get iterated over. The list I am referring to is the one iterated by the outermost foreach loop in the example below.
private void CloseAllReports()
{
    //* get list of completed reports and close each one (populate batches)
    foreach (SalesReport salesReport in GetCompletedSalesReports())
    {
        try
        {
            //* aggregate sales and revenue by each distinct supplier_date in this report
            var aggregates = BatchSalesRevenue(salesReport);
            //* ensure that the entire SalesReport breaks out into Batches; success or failure per SalesReport
            _repository.UnitOfWork.BeginTransaction();
            //* each salesReport here will result in one-to-many batches
            foreach (AggregateBySupplierDate aggregate in aggregates)
            {
                //* get the batch range (type) from the repository
                BatchType batchType = _repository.Single<BatchType>(new BatchTypeSpecification(salesReport.Batch_Type));
                //* get xref from repository, *if available*
                //* some will have already populated the XREF
                CrossReference crossReference = _repository.Single<CrossReference>(new CrossReferenceSpecification(salesReport.Batch_ID, salesReport.RecDate, aggregate.SupplierDate));
                //* create a new batch
                PRSBatch batch = new PRSBatch(salesReport,
                    aggregate.SupplierDate,
                    BatchTypeCode(batchType.Description),
                    BatchControlNumber(batchType.Description, salesReport.RecDate, BatchTypeCode(batchType.Description)),
                    salesReport.Zero_Sales_Flag == false ? aggregate.SalesAmount : 1,
                    salesReport.Zero_Sales_Flag == false ? aggregate.RevenueAmount : 0);
                //* populate CrossReference property; this will either be a crossReference object, or null
                batch.CrossReference = crossReference;
                //* close the batch
                //* see PRSBatch partial class for business rule implementations
                batch.Close();
                //* check XREF to see if it needs to be added to the repository
                if (crossReference == null)
                {
                    //* add the Xref to the repository
                    _repository.Add<CrossReference>(batch.CrossReference);
                }
                //* add batch to the repository
                _repository.Add<PRSBatch>(batch);
            }
            _repository.UnitOfWork.CommitTransaction();
        }
        catch (Exception ex)
        {
            //* log the error
            _logger.Log(User, ex.Message.ToString().Trim(), ex.Source.ToString().Trim(), ex.StackTrace.ToString().Trim());
            //* move on to the next completed salesReport
        }
    }
}
All goes well on the first iteration of the outer loop. On the second iteration of the outer loop, the code fails at _repository.UnitOfWork.CommitTransaction(). The error message returned is the following:
"The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges."
In this situation, the database changes on the second iteration were not committed successfully, but the changes in the first iteration were. I have ensured that objects in the outer and inner loops are all unique, adhering to the database primary keys.
Is there something that I am missing here? I am willing to augment my code samples, if it proves helpful. I have done everything within my capabilities to troubleshoot this issue, minus modifying the composite primary key set on the database table.
Can anyone help??? Much thanks in advance! BTW, sorry for the long post!
I am answering my own question here...
My issue had to do with how the composite primary key was being defined in my mapping class. When defining a composite primary key using EF Code First, you must define it like so:
HasKey(cpk => new { cpk.COMPANYID, cpk.RecDate, cpk.BATTYPCD, cpk.BATCTLNO });
As opposed to how I had it defined previously:
HasKey(cpk => cpk.COMPANYID);
HasKey(cpk => cpk.RecDate);
HasKey(cpk => cpk.BATTYPCD);
HasKey(cpk => cpk.BATCTLNO);
The error I was receiving said that the ObjectContext contained multiple elements of the same type that were not unique. This became an issue in my UnitOfWork on CommitTransaction, because when the mapping class was instantiated from my DbContext class, it executed the 4 HasKey statements shown above, and only the last one (for property BATCTLNO) ended up as the primary key (not a composite key). Defining them inline, as in my first code sample above, resolves the issue.
Hope this helps someone!
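For reference, a sketch of what the CrossReferenceMapping from the question looks like with this fix applied (property names as posted; the rest of the mapping is assumed unchanged):

public class CrossReferenceMapping : EntityTypeConfiguration<CrossReference>
{
    public CrossReferenceMapping()
    {
        // One HasKey call with an anonymous type defines the composite key;
        // repeated HasKey calls just overwrite each other.
        HasKey(cpk => new { cpk.Batch_ID, cpk.RecDate, cpk.Supplier_Date });
        Property(p => p.XREF_ID).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        ToTable("wPRSBatchXREF");
    }
}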

AbstractTransactionalJUnit4SpringContextTests: can't get the dao to find inserted data

I'm trying to set up integration tests using the AbstractTransactionalJUnit4SpringContextTests base class. My goal is really simple: insert some data into the database using the simpleJdbcTemplate, read it back out using a DAO, and roll everything back. The persistence layer is JPA over Hibernate.
For my tests, I've created a version of the database that has no foreign keys. This should speed up testing by reducing the amount of fixture setup for each test; at this stage I'm not interested in testing the DB integrity, just the business logic in my HQL.
/* DAO */
@Transactional
@Repository("gearDao")
public class GearDaoImpl implements GearDao {

    @PersistenceContext
    private EntityManager entityManager;

    /* Properties go here */

    public Gear findById(Long id) {
        return entityManager.find(Gear.class, id);
    }
}
/* Test Page */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/com/dom/app/dao/DaoTests-context.xml"})
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = false)
public class GearDaoImplTests extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired
    private GearDao gearDao;

    @Test
    @Rollback(true)
    public void quickTest() {
        String sql;
        // fields renamed to protect the innocent :-)
        sql = "INSERT INTO Gear (Gear_Id, fld2, fld3, fld4, fld5, fld6, fld7) " +
              " VALUES (?,?,?,?,?,?,?)";
        simpleJdbcTemplate.update(sql, 1L, 1L, 1L, "fld4", "fld5", new Date(), "fld7");
        assertEquals(1L, simpleJdbcTemplate.queryForLong("select Gear_Id from Gear where Gear_Id = 1"));

        System.out.println(gearDao);
        Gear gear = gearDao.findById(1L);
        assertNotNull("gear is null.", gear); // <== This fails.
    }
}
The application (a Spring MVC site) works fine with the DAOs. What could be happening? And where would I begin to look for a solution? My guesses so far:
1. The DAO somehow has a different dataSource than the simpleJdbcTemplate. Not sure how that would be, though, since there's only one dataSource defined in the DaoTests-context.xml file.
2. Hibernate requires all foreign key relations to be present in order to select out the Gear object. There are a couple of joins that are not present since I'm hardcoding those in fld2/fld3/fld4.
3. The DAO won't act on the uncommitted data. But why would the simpleJdbcTemplate honor it? I'd assume they both do the same thing.
4. Underpants gnomes. But where's the profit?
What a difference a couple hours of sleep makes. I woke up and thought "I should check the logs to see what query is actually being executed." And of course it turns out that Hibernate was configured to generate inner joins for a few of the foreign keys, so with the parent rows missing the select returned nothing. Once I supplied those dependency rows it worked like a charm.
I'm loving the automatic-rollback-on-every-test concept. Integration tests, here I come!
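In case it helps anyone else, here is a rough sketch of the kind of fixture the fix implies. The real table and column names were renamed in my snippet above, so Supplier and Category below are purely hypothetical parent tables sitting behind fld2/fld3:

// Hypothetical parent rows: the entity mapping generates inner joins on these
// foreign keys, so the rows must exist before Gear can be selected back out.
simpleJdbcTemplate.update("INSERT INTO Supplier (Supplier_Id, Name) VALUES (?, ?)", 1L, "test supplier");
simpleJdbcTemplate.update("INSERT INTO Category (Category_Id, Name) VALUES (?, ?)", 1L, "test category");

// Now the original insert, with fld2/fld3 pointing at the rows above.
simpleJdbcTemplate.update(
        "INSERT INTO Gear (Gear_Id, fld2, fld3, fld4, fld5, fld6, fld7) VALUES (?,?,?,?,?,?,?)",
        1L, 1L, 1L, "fld4", "fld5", new Date(), "fld7");

Gear gear = gearDao.findById(1L);
assertNotNull("gear is null.", gear); // passes once the joined rows exist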