Perform work in an XA transaction - jpa-2.0

I have a situation where I need to perform some work in a global transaction.
For this reason, I have the following persistence unit defined in my persistence.xml to get a JTA entity manager:
<persistence-unit name="resubEclipselink" transaction-type="JTA">
    <jta-data-source>jdbc/XADataSource</jta-data-source>
    ...
</persistence-unit>
Now, to persist, I first tried to proceed as follows (#1):
if (isXA()) {
    mXAEntityManager.persist(entity);
    mXAEntityManager.flush();
}
Things fail with an exception
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
at
org.eclipse.persistence.internal.jpa.transaction.EntityTransactionWrapper.throwCheckTransactionFailedException(EntityTransactionWrapper.java:113)
I get the same error even when I begin a user transaction before proceeding.
So I tried another approach #2:
// Get the transactional unit of work, or null.
UnitOfWork uow = mEntityManager.getUnitOfWork();
uow.registerObject(entity);
uow.writeChanges();
uow.commit();
This sort of works, but I am not sure it is the right approach.
I would appreciate it if someone could explain why things don't work in the first case, and whether the second approach is fine.

The first approach should work; a configuration setting telling EclipseLink about the transaction is likely missing. Check that you have specified the target-server property described at
http://eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_target_server.htm
as it is used to obtain the transaction manager and register with active transactions.
Once correctly registered with the transaction, there should be no need to call commit on a UnitOfWork directly.
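For example, a minimal sketch of supplying that property programmatically when creating the factory (the value is server-specific; "WebLogic" here is just an illustration):

Map<String, String> props = new HashMap<>();
// Equivalent to <property name="eclipselink.target-server" value="WebLogic"/> in persistence.xml;
// other values include "WebSphere", "JBoss", "SunAS9", depending on your application server.
props.put("eclipselink.target-server", "WebLogic");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("resubEclipselink", props);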

I assume that earlier in your code you instantiated mXAEntityManager the SE way, like this:
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("resubEclipselink");
EntityManager mXAEntityManager = entityManagerFactory.createEntityManager();
which does not work in an EE application deployed to an application server. You should replace those two lines with dependency injection, like this:
@PersistenceContext(unitName = "resubEclipselink")
EntityManager mXAEntityManager;
so that the container injects a persistence context (mXAEntityManager) along with all the transaction management you need. Then no flush(), begin(), or commit() calls will be needed any more, as long as you don't change the default, which is already set to be:
@PersistenceContext(unitName = "resubEclipselink", type = PersistenceContextType.TRANSACTION)
EntityManager mXAEntityManager;
The same goes for another default already set on your session bean:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
Remember, these defaults are what make developing an EE application follow the "configuration by exception" paradigm.
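Put together, a minimal sketch of such a container-managed bean (the bean and method names are made up for illustration):

@Stateless
public class ResubService {

    @PersistenceContext(unitName = "resubEclipselink")
    private EntityManager mXAEntityManager;

    // Default TransactionAttributeType.REQUIRED: the container starts a JTA
    // transaction before the method and commits (or rolls back) afterwards.
    public void save(MyEntity entity) {
        mXAEntityManager.persist(entity); // enlisted in the active JTA transaction
    }
}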

My scenario requires me to decide the transaction boundary at runtime, based on other conditions.
So basically I need my application-managed entity manager to enlist in the global transaction and perform some unit of work as part of it. I actually get an entity manager and set it to use the ExternalTransactionController:
if (isXA) {
    this.mEntityManager = JpaHelper.getEntityManager(
            Persistence.createEntityManagerFactory(XA_PU).createEntityManager());
    this.mServerSession = this.mEntityManager.getServerSession();
    this.mServerSession.getLogin().setUsesExternalTransactionController(true);
    this.mServerSession.getLogin().setUsesExternalConnectionPooling(true);
} else {
    this.mEntityManager = JpaHelper.getEntityManager(
            Persistence.createEntityManagerFactory(RESOURCE_LOCAL_PU).createEntityManager());
    this.mEntityTransaction = this.mEntityManager.getTransaction();
    this.mServerSession = this.mEntityManager.getServerSession();
}
and then acquire and perform the unit of work as...
// Get the transactional unit of work, or null.
UnitOfWork uow = mEntityManager.getUnitOfWork();
if (uow == null) {
    // Resource-local em.
    mEntityTransaction.begin();
    mEntityManager.persist(entity);
    mEntityTransaction.commit();
} else {
    uow.registerObject(entity);
    uow.writeChanges();
    uow.commit();
}
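For reference, I understand the standard JPA way to enlist an application-managed entity manager (from the XA_PU unit) in an already-active JTA transaction is joinTransaction(); a minimal sketch, assuming the container exposes a UserTransaction in JNDI:

// javax.transaction.UserTransaction, javax.naming.InitialContext
UserTransaction utx = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
utx.begin();
mEntityManager.joinTransaction(); // enlist the application-managed EM in the active JTA transaction
mEntityManager.persist(entity);
utx.commit(); // the JTA coordinator flushes and commits the enlisted resources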
As I mentioned before, I am not too confident about this approach, so I would really appreciate it if someone could review it and let me know whether it has any flaws.

Related

How to unit test Service Fabric Actor with State

I've started writing unit tests for a new actor with state. The state is initialised in the OnActivateAsync method, which is called by Service Fabric when the actor is activated.
When unit testing, I'm creating the actor myself, and as the method is protected I can't call it from my unit test.
I'm wondering about the usual approach for this kind of testing. I could mock the actor and mock the state, but have the code I want to test call the original. I'm wondering whether there is another approach I've not come across.
Another approach would be to move the state initialisation somewhere else, like a public method or the constructor, but the Actor template puts the code there, so it may be a best practice.
Use the latest version of the ServiceFabric.Mocks NuGet package. It contains a special extension to invoke the protected OnActivateAsync method, plus a whole tool set for Service Fabric unit testing.
var svc = MockActorServiceFactory.CreateActorServiceForActor<MyActor>();
var actor = svc.Activate(new ActorId(Guid.NewGuid()));
actor.InvokeOnActivateAsync().Wait();
I like to use the InternalsVisibleTo attribute and an internal method on the actor, which calls the OnActivateAsync method.
In the target Actor project's AssemblyInfo.cs, add a line like this:
[assembly: InternalsVisibleTo("MyActor.Test")]
Where "MyActor.Test" is the name of the test project you want to grant access to your internal members.
In the target Actor class add a method something like this:
internal Task InvokeOnActivateAsync()
{
return OnActivateAsync();
}
This way you can invoke the OnActivateAsync method from your test project something like this:
var actor = CreateNewActor(id);
actor.InvokeOnActivateAsync().Wait();
I appreciate this is not ideal, but you can use reflection to call the OnActivateAsync() method.
For example,
var method = typeof(ActorBase).GetMethod("OnActivateAsync", BindingFlags.Instance | BindingFlags.NonPublic);
await (Task)method.Invoke(actor, null);
This way you'll be testing the actual method you want to test and also won't be exposing methods you don't really want to expose.
You may find it useful to group the creation of the actor and the manual call to OnActivateAsync() in a single method so that it's used across your test suite and it mimics the original Service Fabric behaviour.

WebSphere does not commit JPA transaction

Could someone explain to me why WebSphere Application Server 8.5.5 does not commit (or even begin?) transactions in JTA mode?
I have a DAO class annotated with
@Stateless
@TransactionManagement(value = TransactionManagementType.CONTAINER)
and I have a method annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW). The method simply inserts some entities into the database (if they do not exist yet):
for (MyEntity entity : entities) {
    if (validate(entity)) { // Programmatic bean validation, returns true when ok.
        getEntityManager().persist(entity);
    }
}
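For reference, the complete setup described above looks roughly like this (simplified; class and method names changed):

@Stateless
@TransactionManagement(TransactionManagementType.CONTAINER)
public class MyEntityDao {

    @PersistenceContext
    private EntityManager em;

    // Container-managed transaction: no begin()/commit() in the method body.
    // validate(...) is the programmatic bean-validation helper mentioned above.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void saveAll(List<MyEntity> entities) {
        for (MyEntity entity : entities) {
            if (validate(entity)) {
                em.persist(entity);
            }
        }
    }
}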
Tests run with Arquillian on embedded GlassFish work perfectly. I can stop at a breakpoint in Eclipse (Luna & Kepler) after this method completes and check in the db that the data is there. The data used in the tests is identical to the data used when deployed on WAS. (Validation errors are shown correctly when tested separately.)
According to the instructions (http://docs.oracle.com/javaee/6/tutorial/doc/bncij.html):
The code does not include statements that begin and end the transaction...
I probably don't understand this correctly, as I have to explicitly wrap the method contents with these:
getEntityManager().getTransaction().begin();
... The persist loop ...
getEntityManager().getTransaction().commit();
...to make the persisting work.
If I do not do this, there is nothing put in to the database.
I also injected an extra resource for checking the transaction status
@Resource
private TransactionSynchronizationRegistry tsr;
and put this at the end of the method
System.out.println("Transaction status: " + tsr.getTransactionStatus());
getEntityManager().flush();
The output was this:
Transaction status: 0
where 0 = Status.STATUS_ACTIVE
However, at the flush, an exception was thrown:
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
I spent days trying to figure this out on WAS, while I had it working all along in the embedded GlassFish (v3) tests.
Both use Java EE 6 (and Java 6), though to debug in Eclipse I have to switch to Java EE 7 + Java 7.
Prior to this, in another project, I wrote similar code on GlassFish v4 without any problems.
So could someone clarify whether there are WAS-specific requirements to make this work, or do I just need to do the exact opposite on WAS of what the instructions say and of how I understand things should work?
I have already the following configuration on WAS:
(admin console)
server > server types > WebSphere application servers > server1 > Container Services > Default Java Persistence API settings > Default JTA data source JNDI name = 'jdbc/kr' (the same as configured in my persistence.xml)
resources > JDBC > JDBC providers > Oracle JDBC Driver (pings ok)
(When this was created) the 'Implementation type' was set to 'Connection pool data source', but I also tried the 'XA' variant.
// UPDATE
The getEntityManager method simply returns the entity manager injected in the superclass.
public abstract class GenericDAO<T extends GenericEntity> {
    @PersistenceContext
    private EntityManager em;
    ...
    public EntityManager getEntityManager() {
        return this.em;
    }
}
// GenericEntity is an interface that forces the entities to have the "get all" named query.
The class uses the generic-DAO pattern (you get the idea from Single DAO & generic CRUD methods (JPA/Hibernate + Spring), though I have my own modifications, as it's an abstract class with default CRUD methods).
When getEntityManager() is used instead of accessing the resource directly, the implementing DAO class can override the entity manager used in the superclass with its own: the superclass also calls getEntityManager(), so the abstract class ends up using the same em as the implementing class. This method is also handy in tests, where you can get the em and evict data when needed.
It also makes it easy to add logging whenever the em is accessed (a logging interceptor).
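As an illustration of that override, a concrete DAO could supply its own persistence context like this (hypothetical names):

@Stateless
public class CustomerDao extends GenericDAO<Customer> {

    // This DAO's own persistence context; the unit name is hypothetical.
    @PersistenceContext(unitName = "otherUnit")
    private EntityManager customEm;

    @Override
    public EntityManager getEntityManager() {
        // The superclass's default CRUD methods now operate on this em too.
        return customEm;
    }
}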
// UPDATE 2
It occurred to me that a separate resource manager is used to get remote resources (EJBs), so that the location of the EJB is configurable from a property file. However, the inner injection still works within this service's EJB.
I started wondering whether this could somehow cause the container to lose its transaction-handling ability.
I also noted that there is a @Singleton-scoped bean along the path that uses the actual transactional resources. I could not find a clear explanation of what scope the beans should have (probably there is no requirement of any kind), but I came to the understanding that the DAO should be @Stateless.
In Java EE 7 this is much clearer, as there is the @Transactional annotation for declaring it.
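A minimal Java EE 7 sketch of that (the method name is assumed):

@Transactional(Transactional.TxType.REQUIRED) // javax.transaction.Transactional, new in Java EE 7
public void saveAll(List<MyEntity> entities) {
    for (MyEntity entity : entities) {
        getEntityManager().persist(entity); // joins the transaction the container started
    }
}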

Service @Transactional exception translation

I have a web service with an operation that looks like
public Result checkout(String id) throws LockException;
implemented as:
@Transactional
public Result checkout(String id) throws LockException {
    someDao.acquireLock(id); // ConstraintViolationException might be thrown on commit
    Data data = otherDao.find(id);
    return convert(data);
}
My problem is that locking can only fail on transaction commit which occurs outside of my service method so I have no opportunity to translate the ConstraintViolationException to my custom LockException.
Option 1
One option that's been suggested is to make the service delegate to another method that's @Transactional, e.g.:
public Result checkout(String id) throws LockException {
    try {
        return someInternalService.checkout(id);
    } catch (ConstraintViolationException ex) {
        throw new LockException();
    }
}
...
public class SomeInternalService {
    @Transactional
    public Result checkout(String id) {
        someDao.acquireLock(id);
        Data data = otherDao.find(id);
        return convert(data);
    }
}
My issues with this are:
There is no reasonable name for the internal service that isn't already in use by the external service since they are essentially doing the same thing. This seems like an indicator of bad design.
If I want to reuse someInternalService.checkout in another place, the contract for that is wrong because whatever uses it can get a ConstraintViolationException.
Option 2
I thought of maybe using AOP to put advice around the service that translates the exception. This seems wrong to me, though, because checkout needs to declare that it throws LockException for clients to use it, yet the actual service will never throw it; it will instead be thrown by the advice. There's nothing to prevent someone in the future from removing throws LockException from the interface because it appears to be incorrect.
Also, this way is harder to test. I can't write a JUnit test that verifies the exception is thrown without creating a Spring context and using AOP during the tests.
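For concreteness, the advice I have in mind would look something like this (hypothetical names; the aspect would have to be ordered outside the transaction interceptor so the commit has already happened when it runs):

@Aspect
public class LockExceptionTranslator {

    @Around("execution(* com.example.CheckoutService.checkout(..))")
    public Object translateLockFailure(ProceedingJoinPoint pjp) throws Throwable {
        try {
            return pjp.proceed(); // the transaction commits inside this call
        } catch (ConstraintViolationException ex) {
            throw new LockException();
        }
    }
}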
Option 3
Use manual transaction management in checkout? I don't really like this, because everything else in the application uses the declarative style.
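If I did go that way, it would look something like this with Spring's TransactionTemplate (assumed to be injected; the lambda syntax needs Spring 4+):

public Result checkout(String id) throws LockException {
    try {
        // execute() commits when the callback returns, so the
        // commit-time exception is catchable right here.
        return transactionTemplate.execute(status -> {
            someDao.acquireLock(id);
            Data data = otherDao.find(id);
            return convert(data);
        });
    } catch (ConstraintViolationException ex) {
        throw new LockException();
    }
}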
Does anyone know the correct way to handle this situation?
There's no one correct way.
A couple more options for you:
- Make the DAO transactional - that's not great, but it can work.
- Create a wrapping service - called a Facade - whose job is to do the exception handling/wrapping around the transactional services you've mentioned. This is a clear separation of concerns and can share method names with the real lower-level service (see the sketch below).
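A minimal sketch of such a facade (names invented; the inner service is the @Transactional bean from Option 1):

public class CheckoutFacade {

    private final SomeInternalService someInternalService; // the @Transactional service

    public CheckoutFacade(SomeInternalService someInternalService) {
        this.someInternalService = someInternalService;
    }

    // Same method name as the lower-level service; the transaction commits
    // when the inner call returns, so the exception is translatable here.
    public Result checkout(String id) throws LockException {
        try {
            return someInternalService.checkout(id);
        } catch (ConstraintViolationException ex) {
            throw new LockException();
        }
    }
}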

Design Practice: code to create before deletion in deletion test case?

I have written a test case for deletion of an entity. In the test case I simply pick the first record via a select query and pass its id to the deletion method. The entity I want to delete can have child entities restricting it from deletion, so I suppose I should create an entity first in my deletion test case and then destroy it, so that I don't face child-dependency issues.
Is it good practice to write code that creates an entity before deletion? It's kind of testing the creation method before the deletion method. Please suggest.
Edit:
I am working on the Rails platform, so I have features like loading the database with fixtures (not using them currently; I'm facing some errors with them, see https://stackoverflow.com/questions/5288142/rails-fixture-expects-table-name-to-be-prefixed-with-module-name-how-to-disable). And yes, I am using configuration to restore the database state after each test-case run.
In unit-testing, you usually perform some sort of set-up before you run your tests.
Many testing frameworks support this sort of operation. Normally you don't do it through external queries, though; for instance, you could directly create an object with certain properties instead of performing an externally exposed create query.
Because you directly create the object in the first place, you are not testing your creation-query code (unless the way you internally create objects is flawed, but if you are concerned about that, you can test it too), and your deletion code is the only thing being tested.
In test case I simply pick first record by select query
This is wrong. You should not execute queries during unit testing.
The tests that I can see are:
- delete an existing entity;
- delete a non-existent entity;
- delete a child;
- delete a non-existent child.
If your unit-testing framework allows test dependencies, i.e. running test X only if test Y passes and passing Y's return value as a parameter to X, you can get away with it. Here's how that would look in PHP:
function setUp() {
    $this->dao = new UserDao(...);
}

function testCreate() {
    $user = $this->dao->create('Bob');
    assertThat($user, notNullValue());
    // more assertions about the new user
    return $user->getId();
}

/**
 * @depends testCreate
 */
function testDelete($id) {
    assertThat($this->dao->delete($id), is(true));
}
PHPUnit will skip testDelete() if testCreate() fails. This is a good work-around if you cannot set up a standard test data set before each test run.
Is it good practice to write code for creation of an entity before deletion? It's kind of testing the creation method before the deletion method. Please suggest.
Yes, it's good practice to create the entity whose deletion you are testing, so that the test does not depend on external state, and is repeatable independent of other tests.
This doesn't test creation, but uses creation in order to set up for testing deletion.
If you have multiple tests relying on the same data, the creation can be pulled out to a method that you call in each of your tests that needs that data. Most test frameworks also have a mechanism for specifying setup methods that are run before each test, and you could put the creation there if the data is needed for all tests in a test class.
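A minimal JUnit sketch of that arrangement (the DAO and entity types are hypothetical):

public class EntityDeletionTest {

    private EntityDao dao;
    private Long id;

    @Before
    public void setUp() {
        // Create the entity under test directly, so the delete test does
        // not depend on pre-existing rows or on any other test.
        dao = new EntityDao();
        id = dao.create(new Entity("fixture")).getId();
    }

    @Test
    public void deleteRemovesEntity() {
        assertTrue(dao.delete(id));
        assertNull(dao.find(id));
    }
}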

How to use "Pex and Moles" library with Entity Framework?

This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit-testing tool).
I have a Data project with a very simple model containing just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business-layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method, using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that checks that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by passing null to the test method (so items = null). This is the code that Pex runs:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update the attribute to [PexMethod(MaxStack = 200)].
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will run (I'm not sure it does now)? I'm running just Pex & Moles - no VS test, no nUnit, nothing else.
I guess I should probably set some limit in Pex on how many items it should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria are relevant.
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq. ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, the errors went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have also had to increase the timeout on the code-contract validation.
One thing you should probably be doing, though, is passing in all the dependent objects as parameters, i.e. don't instantiate the repo in the method, but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use them to test my data-access code against.
I use Pex on my business logic and objects.
If I were to try to test my DAL code, I'd have to use IoC to pass the data context into the methods, which would make testing possible, as you can then mock the data context.
You should use the Entity Framework repository pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx