WebSphere does not commit JPA transaction - jpa-2.0

Could someone explain to me why WebSphere Application Server 8.5.5 does not commit (or even begin?) transactions in JTA mode.
I have a DAO class annotated with
@Stateless
@TransactionManagement(value = TransactionManagementType.CONTAINER)
and a method annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW). The method simply inserts some entities into the database (if they do not exist yet):
for (MyEntity entity : entities) {
    if (validate(entity)) { // Programmatic bean validation, returns true when OK
        getEntityManager().persist(entity);
    }
}
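For reference, here is how those pieces fit together in one bean (a sketch assembled from the annotations above; the class name, method name, and validate body are illustrative):
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import java.util.List;

@Stateless
@TransactionManagement(TransactionManagementType.CONTAINER)
public class MyEntityDao {

    @PersistenceContext
    private EntityManager em;

    // The container should begin a new JTA transaction on entry and
    // commit it when the method returns without a system exception.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void saveAll(List<MyEntity> entities) {
        for (MyEntity entity : entities) {
            if (validate(entity)) {
                em.persist(entity);
            }
        }
    }

    private boolean validate(MyEntity entity) {
        return true; // placeholder for programmatic bean validation
    }
}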
Tests run with Arquillian in embedded GlassFish, and there this works perfectly. I can stop at a breakpoint in Eclipse (Luna & Kepler) after this method completes and verify in the database that the data is there. The data used in the tests is identical to the data used when deployed on WAS. (Validation errors are reported correctly when tested separately.)
According to the instructions (http://docs.oracle.com/javaee/6/tutorial/doc/bncij.html):
The code does not include statements that begin and end the transaction...
I probably misunderstand this, because I have to explicitly wrap the method contents like this:
getEntityManager().getTransaction().begin();
... The persist loop ...
getEntityManager().getTransaction().commit();
...to make the persisting work.
If I do not do this, nothing is put into the database.
I also injected an extra resource for checking the transaction status:
@Resource
private TransactionSynchronizationRegistry tsr;
and put this at the end of the method
System.out.println("Transaction status: " + tsr.getTransactionStatus());
getEntityManager().flush();
The output was this:
Transaction status: 0
where 0 = Status.STATUS_ACTIVE
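The numeric codes come from javax.transaction.Status; a small helper like the following sketch makes the output readable:
import javax.transaction.Status;

// Sketch: translate TransactionSynchronizationRegistry status codes.
static String statusName(int status) {
    switch (status) {
        case Status.STATUS_ACTIVE:          return "ACTIVE";          // 0
        case Status.STATUS_MARKED_ROLLBACK: return "MARKED_ROLLBACK"; // 1
        case Status.STATUS_COMMITTED:       return "COMMITTED";       // 3
        case Status.STATUS_ROLLEDBACK:      return "ROLLEDBACK";      // 4
        case Status.STATUS_NO_TRANSACTION:  return "NO_TRANSACTION";  // 6
        default:                            return "OTHER(" + status + ")";
    }
}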
However, at the flush() an exception was thrown:
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
I spent days trying to figure this out on WAS, while I had it working the whole time in the embedded GlassFish (v3) tests.
Both use Java EE 6 (and Java 6), though for debugging in Eclipse I have to switch to Java EE 7 + Java 7.
Prior to this, in another project, I wrote similar code on GlassFish v4 without any kind of problems.
So could someone clarify whether there are some WAS-specific requirements to make this work, or do I just need to do the exact opposite on WAS of what the instructions say and of how I understand things should work?
I already have the following configuration on WAS:
(admin console)
server > server types > WebSphere application servers > server1 > Container Services > Default Java Persistence API settings > Default JTA data source JNDI name = 'jdbc/kr' (the same as configured in my persistence.xml)
resources > JDBC > JDBC providers > Oracle JDBC Driver (pings ok)
(When this was created) the 'Implementation type' was set to 'Connection pool data source', but I also tried 'XA data source'.
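For reference, the matching persistence.xml entry would look roughly like this (a sketch; the unit name is illustrative, the JNDI name is the one configured above):
<persistence-unit name="krUnit" transaction-type="JTA">
    <jta-data-source>jdbc/kr</jta-data-source>
    ...
</persistence-unit>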
// UPDATE
The getEntityManager method simply returns the entity manager injected in the superclass:
public abstract class GenericDAO<T extends GenericEntity> {

    @PersistenceContext
    private EntityManager em;

    ...

    public EntityManager getEntityManager() {
        return this.em;
    }
}
// GenericEntity is an interface to force the entities to have the "get all" named query.
The class uses the generic DAO pattern (you get the idea from Single DAO & generic CRUD methods (JPA/Hibernate + Spring), though I have my own modifications, as it's an abstract class with default CRUD methods).
When the getEntityManager method is used instead of accessing the resource directly, it is possible to override the entity manager used in the superclass if the concrete DAO class decides to use its own. The superclass also calls getEntityManager, so if you override it in the implementing class, the abstract class will use the same entity manager as the implementing class. This method is also usable in tests, where you can get the entity manager and evict data when needed.
This way you can also easily add logging when the entity manager is accessed (with a logging interceptor).
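A concrete DAO that overrides the entity manager might then look like this (a sketch; the class and unit names are illustrative):
@Stateless
public class MyEntityDAO extends GenericDAO<MyEntity> {

    // This DAO supplies its own persistence context...
    @PersistenceContext(unitName = "otherUnit")
    private EntityManager myEm;

    // ...and the inherited CRUD methods pick it up through this override.
    @Override
    public EntityManager getEntityManager() {
        return this.myEm;
    }
}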
// UPDATE 2
It occurred to me that there is a separate resource manager used to get remote resources (EJBs). This is so that the location of the EJB is configurable from a property file. However, injection still works within the EJB of this service of mine.
I started wondering whether this could somehow cause the container to lose its transaction-handling ability.
I also noted that there is a @Singleton-scoped bean along the path that uses the actual transactional resources. I could not find a clear explanation of what scopes the beans should have (probably there is no requirement of any kind), but I ended up with the understanding that the DAO should be @Stateless.
In Java EE 7 this is much clearer, as there is the @Transactional annotation for pointing this out.

Related

EnableNeo4jRepositories.sessionFactoryRef is ignored / does nothing

I'm trying to configure a Spring Boot 1.5.9 project with multiple data sources, of which some are Neo4j.
The version of spring-data-neo4j I'm using is 4.2.9.
My goal is to use a different SessionFactory for different repositories, using a different Configuration class for each.
I've got this all working with Mongo, but it seems that, even though sessionFactoryRef is available on @EnableNeo4jRepositories, it simply does not get acted upon.
Abbreviated version of my configuration, with the general concepts:
@org.springframework.context.annotation.Configuration
@EnableNeo4jRepositories(basePackages = "<repo-package-name>", sessionFactoryRef = NEO4J_SESSIONFACTORY_NAME)
public class MyConfiguration {

    protected static final String NEO4J_SESSIONFACTORY_NAME = "mySessionFactory";

    @Bean(NEO4J_SESSIONFACTORY_NAME)
    public SessionFactory mySessionFactory() {
        SessionFactory sessionFactory = ...
        // passing entity package corresponding to repository
        return sessionFactory;
    }
}
As mentioned, this construct works fine with spring-data-mongodb; with neo4j, however, it first fails with an error:
***************************
APPLICATION FAILED TO START
***************************
Description:
A component required a bean named 'getSessionFactory' that could not be found.
Action:
Consider defining a bean named 'getSessionFactory' in your configuration.
Turning on debug in the logger and a look through the code led me to SessionBeanDefinitionRegistrarPostProcessor, that contains the following code to get the sessionFactory:
private static String getSessionFactoryBeanRef(ConfigurableListableBeanFactory beanFactory) {
    return beanFactory.containsBeanDefinition("sessionFactory") ? "sessionFactory" : "getSessionFactory";
}
Hmmm... hardcoded names for a bean, no sign of customisability.
I then proceeded to name my bean twice, @Bean({"sessionFactory", NEO4J_SESSIONFACTORY_NAME}), so the above code would pass.
The application started, but the problem is that the repositories get wired with whatever bean is called sessionFactory, effectively not using the sessionFactoryRef on the annotation.
To test this, I changed the name on the annotation to a non-existing bean and it continued to start (if I do this with the mongo-annotation, the application quits because the bean mentioned in mongoTemplateRef isn't available).
I dug a little deeper and found that, for mongo, it retrieves the bean reference in this class. The equivalent neo4j implementation has no such thing. It could of course be an implementation detail but I wasn't able to find any reference to the sessionFactoryRef attribute other than the annotation and the xml-schema.
There are also other places in the config classes that expect only one SessionFactory to be available.
So, in short, it seems to me that EnableNeo4jRepositories.sessionFactoryRef has no implementation and therefore simply doesn't do anything.
As a result, with the current code a single bean "sessionFactory" must be present and all repositories will be wired with this bean, regardless of the value of sessionFactoryRef.
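Until that changes, the only arrangement the hardcoded lookup accepts is a single factory under that exact name (a sketch restating the workaround; the entity package is illustrative):
@Configuration
@EnableNeo4jRepositories(basePackages = "<repo-package-name>")
public class SingleNeo4jConfiguration {

    // The hardcoded lookup shown above only finds "sessionFactory"
    // (or "getSessionFactory"), so every repository is wired with this bean.
    @Bean("sessionFactory")
    public SessionFactory sessionFactory() {
        return new SessionFactory("<entity-package-name>");
    }
}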
Anybody else with a similar experience or any idea how to file a bug for this?

how to mock a service so as to inject it properly

I'm trying to mock a service, used by another service, within a functional test:
$client = static::createClient();
$stub = $this->createMock(MailService::class);
$stub->method('sendMailToUser')->willReturn(9);
$client->getContainer()->set('belka.auth_bundle.mail_service', $stub);
// the *real* test should start here
If I put a die statement inside the original sendMailToUser, the code stops there, even though I tried to mock the method to return 9. What's wrong? The service I'm testing has the following definition, so I assumed the injected service was the one written above:
belka.auth_bundle.user_handler:
    class: Belka\AuthBundle\Handler\UserHandler
    arguments:
        - '@belka.auth_bundle.user_repository'
        - '@belka.auth_bundle.mail_service'
    calls:
        - [setRequest, ["@request_stack"]]
        - [setSettings, ["@belka.auth_bundle.setting_handler"]]
        - [setBodyJsonHandler, ["@belka.container_support_bundle.body_json_handler"]]
        - [setQuantityHandler, ["@belka.container_support_bundle.quantityhandler"]]
I trust Symfony to create services correctly, so I don't usually fetch services from the container just to test inside them (I do have a smoke test that tries to create almost every service, though).
So, if you are trying to test 'belka.auth_bundle.user_handler', I would manually create the Belka\AuthBundle\Handler\UserHandler instance, with your mock as one of the arguments.
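For example (a sketch; the repository class name is assumed from the service id above):
<?php
$userRepository = $this->createMock(UserRepository::class);

$mailService = $this->createMock(MailService::class);
$mailService->method('sendMailToUser')->willReturn(9);

// Mirror the constructor arguments from the service definition.
$handler = new \Belka\AuthBundle\Handler\UserHandler(
    $userRepository,
    $mailService
);
// Apply the 'calls' from the definition as needed, e.g.:
// $handler->setRequest($requestStack);

// Exercise $handler here; sendMailToUser now returns 9.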
For services that are needed deeper within other services, there is no easy way to mock them (or to get the mock into place), but you can use the service container's environments to override them.
For example, I have a service that tests whether a Request has come from a bot - while running functional tests I replace it entirely with a service that always says 'not a bot', by overriding my 'app.bot_detect.detector' service in /config/services_test.yml:
# set the default bot detector to a simple fake, always return false
app.bot_detect.detector:
    class: App\Services\BotDetectorNeverBot
In the main config/services.yml the class would really perform the check, but the test environment's BotDetectorNeverBot class always says false.
In the same way, you could override belka.auth_bundle.mail_service in services_test.yml (or config_test.yml) so it does not send email and stores something instead. You just have to make sure you are including '*_test.yml' files appropriately.
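That override could look like this (a sketch; the replacement class is hypothetical):
# services_test.yml - swap the real mailer for a no-op that records calls
belka.auth_bundle.mail_service:
    class: Tests\Fake\RecordingMailService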

Grails unit test mock service vs assign service instance

What is the difference between mocking a service and assigning a new instance of the class to the service property?
For example:
class MyService {
    def callServiceMethod() {
        // business logic
    }
}

class MyController {
    def myService

    def callServiceMethod() {
        myService.callServiceMethod()
    }
}

@TestFor(MyController)
class MyControllerTests {

    @Before
    void setup() {
        controller?.myService = new MyService()
        // vs
        controller?.myService = mockFor(MyService)
    }

    void testCallServiceMethod() {
        controller.callServiceMethod()
    }
}
Can anyone help me, please?
When using Spring, you typically lose a lot of behavior if you create a new instance of a class that's registered as a Spring bean. Beans often have multiple other beans dependency-injected into them and those fields would be null in a plain new instance, and various annotations trigger wrapping the bean instance in one or more proxies that add extra checks and behavior before and/or after your methods are called - and that won't happen with a new instance. These proxies include the transactional wrapper you get with @Transactional, the cache checks from @Cacheable, and security checks from @Secured and other Spring Security annotations.
In addition, Grails adds a lot of code to most artifacts (in particular domain classes and controllers). Most of that is added to the bytecode, but some is added at runtime to the metaclass. Although the bytecode is there for a new instance, it often needs a final bit of configuration at runtime. For example there are over 100 GORM methods added to domain classes, but they don't work by themselves and need to be "hooked up" to the current GORM implementation (Hibernate, MongoDB, etc.) This is why you sometimes see an error like "this class was used outside of a Grails application" - the class for some reason didn't have a GORM impl attached, so it can't function.
mockFor and annotations like @TestFor and @Mock don't add all of this behavior, but they do add a large subset of it, and they add mocked but realistic implementations of many methods. The goal is to give the collaborators of your class under test enough run-app-like behavior that they work essentially like they would in a real app, so you can focus on the class being tested without having to think about configuring a test database, or fake web requests and responses, etc.
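To make the contrast concrete, here is a sketch of the mockFor variant with an expectation (the demand block and return value are illustrative):
@TestFor(MyController)
class MyControllerTests {

    void testCallServiceMethodWithMock() {
        // mockFor returns a mock control; demand declares the expected calls.
        def serviceControl = mockFor(MyService)
        serviceControl.demand.callServiceMethod(1..1) { -> 'stubbed result' }
        controller.myService = serviceControl.createMock()

        controller.callServiceMethod()

        // Fails the test if callServiceMethod was not invoked exactly once.
        serviceControl.verify()
    }
}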

Perform work in an XA transaction

I have a situation where I need to perform some work in a global transaction.
For this reason, I have the following persistence unit defined in my persistence.xml to get a JTA entity manager:
<persistence-unit name="resubEclipselink" transaction-type="JTA">
    <jta-data-source>jdbc/XADataSource</jta-data-source>
    ...
</persistence-unit>
Now, to persist, I tried to proceed as follows (#1):
if (isXA()) {
    mXAEntityManager.persist(entity);
    mXAEntityManager.flush();
}
Things fail with an exception
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
at
org.eclipse.persistence.internal.jpa.transaction.EntityTransactionWrapper.throwCheckTransactionFailedException(EntityTransactionWrapper.java:113)
I got the same error when I began a user transaction before proceeding.
So I tried another approach (#2):
// get the transactional unit of work or null.
UnitOfWork uow = mEntityManager.getUnitOfWork();
uow.registerObject(entity);
uow.writeChanges();
uow.commit();
This sort of works, but I am not sure it is the right approach.
I would appreciate it if someone could help me understand why things don't work in the first case, and whether the second approach is fine.
The first approach should work; a configuration setting telling EclipseLink about the transaction is likely missing. Check that you have specified the target-server property described here:
http://eclipse.org/eclipselink/documentation/2.4/jpa/extensions/p_target_server.htm
It is used to obtain the transaction manager and register with active transactions.
Once correctly registered with the transaction, there should be no need to call commit on a UnitOfWork directly.
I assume that earlier in your code you instantiated mXAEntityManager the Java SE way, like this:
EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("resubEclipselink");
EntityManager mXAEntityManager = entityManagerFactory.createEntityManager();
which does not work in an EE application deployed to an application server. You should replace those two lines with dependency injection, like this:
@PersistenceContext(unitName = "resubEclipselink")
EntityManager mXAEntityManager;
so that you allow the container to inject a persistence context (mXAEntityManager) accompanied by all the transaction management you need. No flush(), begin(), or commit() calls will be needed any more, as long as you don't change the default, which is already set to be:
@PersistenceContext(unitName = "resubEclipselink", type = PersistenceContextType.TRANSACTION)
EntityManager mXAEntityManager;
as well as the other default already set for your session bean:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
Remember, these defaults are what make developing an EE application follow the "configuration by exception" paradigm.
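Put together, the container-managed version might look like this (a sketch; the bean and method names are illustrative):
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class ResubService {

    // The container injects a transaction-scoped persistence context
    // and enlists it in the active JTA transaction.
    @PersistenceContext(unitName = "resubEclipselink")
    private EntityManager em;

    // REQUIRED is the default transaction attribute, so no annotation
    // and no begin()/commit()/flush() calls are needed.
    public void save(Object entity) {
        em.persist(entity);
    }
}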
My scenario runs in a way where I need to decide the transaction boundary based on some other conditions.
So basically I need my application-managed entity manager to enlist in the global transaction and perform some unit of work as part of it. So I get an entity manager and set it to use the ExternalTransactionController:
if (isXA) {
    this.mEntityManager = JpaHelper.getEntityManager(
            Persistence.createEntityManagerFactory(XA_PU).createEntityManager());
    this.mServerSession = this.mEntityManager.getServerSession();
    this.mServerSession.getLogin().setUsesExternalTransactionController(true);
    this.mServerSession.getLogin().setUsesExternalConnectionPooling(true);
} else {
    this.mEntityManager = JpaHelper.getEntityManager(
            Persistence.createEntityManagerFactory(RESOURCE_LOCAL_PU).createEntityManager());
    this.mEntityTransaction = this.mEntityManager.getTransaction();
    this.mServerSession = this.mEntityManager.getServerSession();
}
and then acquire and perform the unit of work as follows:
// get the transactional unit of work, or null
UnitOfWork uow = mEntityManager.getUnitOfWork();
// resource-local em
if (uow == null) {
    mEntityTransaction.begin();
    mEntityManager.persist(entity);
    mEntityTransaction.commit();
} else {
    uow.registerObject(entity);
    uow.writeChanges();
    uow.commit();
}
As I mentioned before, I am not too confident about this approach, so I would really appreciate it if someone could review it and let me know whether it has any flaws.

How are integration tests written for interacting with external API?

First up, where my knowledge is at:
Unit Tests are those which test a small piece of code (single methods, mostly).
Integration Tests are those which test the interaction between multiple areas of code (which hopefully already have their own Unit Tests). Sometimes, parts of the code under test requires other code to act in a particular way. This is where Mocks & Stubs come in. So, we mock/stub out a part of the code to perform very specifically. This allows our Integration Test to run predictably without side effects.
All tests should be able to run stand-alone, without data sharing. If data sharing is necessary, this is a sign the system isn't decoupled enough.
Next up, the situation I am facing:
When interacting with an external API (specifically, a RESTful API that will modify live data with a POST request), I understand we can (should?) mock out the interaction with that API (more eloquently stated in this answer) for an Integration Test. I also understand we can Unit Test the individual components of interacting with that API (constructing the request, parsing the result, throwing errors, etc). What I don't get is how to actually go about this.
So, finally: My question(s).
How do I test my interaction with an external API that has side effects?
A perfect example is Google's Content API for Shopping. To perform the task at hand, it requires a decent amount of prep work, then performing the actual request, then analysing the return value. Some of this happens without any 'sandbox' environment.
The code to do this generally has quite a few layers of abstraction, something like:
<?php
class Request
{
    public function setUrl(..) { /* ... */ }
    public function setData(..) { /* ... */ }
    public function setHeaders(..) { /* ... */ }

    public function execute(..) {
        // Do some CURL request or some-such
    }

    public function wasSuccessful() {
        // some test to see if the CURL request was successful
    }
}

abstract class GoogleAPIRequest
{
    private $request;

    abstract protected function getUrl();
    abstract protected function getData();

    public function __construct() {
        $this->request = new Request();
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }

    public function doRequest() {
        $this->request->execute();
    }

    public function wasSuccessful() {
        return ($this->request->wasSuccessful() && $this->parseResult());
    }

    private function parseResult() {
        // return false when result can't be parsed
    }

    protected function getHeaders() {
        // return some GoogleAPI specific headers
    }
}

class CreateSubAccountRequest extends GoogleAPIRequest
{
    private $dataObject;

    public function __construct($dataObject) {
        // Set the data before the parent constructor reads it via getData().
        $this->dataObject = $dataObject;
        parent::__construct();
    }

    protected function getUrl() {
        return "http://...";
    }

    protected function getData() {
        return $this->dataObject->getSomeValue();
    }
}

class aTest
{
    public function testTheRequest() {
        $dataObject = getSomeDataObject(..);
        $request = new CreateSubAccountRequest($dataObject);
        $request->doRequest();
        $this->assertTrue($request->wasSuccessful());
    }
}
?>
Note: This is a PHP5 / PHPUnit example
Given that testTheRequest is the method called by the test suite, the example will execute a live request.
Now, this live request will (hopefully, provided everything went well) do a POST request that has the side effect of altering live data.
Is this acceptable? What alternatives do I have? I can't see a way to mock out the Request object for the test. And even if I did, it would mean setting up results/entry points for every possible code path that Google's API accepts (which in this case would have to be found by trial and error), but it would allow me to use fixtures.
A further complication arises when certain requests rely on certain data already being live. Using the Google Content API as an example again: to add a data feed to a sub-account, the sub-account must already exist.
One approach I can think of is the following steps;
In testCreateAccount
Create a sub-account
Assert the sub-account was created
Delete the sub-account
Have testCreateDataFeed depend on testCreateAccount not having any errors
In testCreateDataFeed, create a new account
Create the data feed
Assert the data feed was created
Delete the data feed
Delete the sub-account
This then raises a further question: how do I test the deletion of accounts/data feeds? testCreateDataFeed feels dirty to me - what if creating the data feed fails? The test fails, and therefore the sub-account is never deleted... I can't test deletion without creation, so do I write another test (testDeleteAccount) that relies on testCreateAccount before creating and then deleting an account of its own (since data shouldn't be shared between tests)?
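One way to guarantee the cleanup regardless of assertion failures is PHPUnit's tearDown hook, which runs after every test even when it fails (a sketch reusing CreateSubAccountRequest from above; DeleteSubAccountRequest and the response parsing are hypothetical):
<?php
class SubAccountLifecycleTest extends PHPUnit_Framework_TestCase
{
    private $subAccountId = null;

    public function testCreateDataFeed() {
        $request = new CreateSubAccountRequest(getSomeDataObject(..));
        $request->doRequest();
        $this->assertTrue($request->wasSuccessful());
        $this->subAccountId = 'id-from-response'; // parse from the response

        // ...create the data feed and assert on it here...
    }

    protected function tearDown() {
        // Runs even when the assertions above fail, so the live
        // sub-account is always removed if it was created.
        if ($this->subAccountId !== null) {
            $cleanup = new DeleteSubAccountRequest($this->subAccountId); // hypothetical
            $cleanup->doRequest();
        }
    }
}
?>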
In Summary
How do I test interacting with an external API that affects live data?
How can I mock / stub objects in an Integration test when they're hidden behind layers of abstraction?
What do I do when a test fails and the live data is left in an inconsistent state?
How in code do I actually go about doing all this?
Related:
How can mocking external services improve unit tests?
Writing unit tests for a REST-ful API
This is more an additional answer to the one already given:
Looking through your code, the GoogleAPIRequest class has a hard-coded dependency on the Request class. This prevents you from testing it independently of Request, so you can't mock the request.
You need to make the Request injectable so you can swap in a mock while testing. Once that is done, no real API HTTP requests are sent, the live data is not changed, and your tests run much quicker.
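A minimal sketch of that change, based on the classes in the question (the default argument keeps existing callers working):
<?php
abstract class GoogleAPIRequest
{
    private $request;

    abstract protected function getUrl();
    abstract protected function getData();

    // Accept the Request from outside; fall back to a real one.
    public function __construct(Request $request = null) {
        $this->request = $request ?: new Request();
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }

    // ...rest of the class unchanged...
}
?>
In a test you can then construct the subclass with a mocked Request (after threading the extra constructor parameter through CreateSubAccountRequest).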
I recently had to update a library because the API it connects to was updated.
My knowledge isn't enough to explain this in detail, but I learnt a great deal from looking at the code: https://github.com/gridiron-guru/FantasyDataAPI
You can submit a request to the API as you normally would, save that response as a JSON file, and then use it as a mock.
Have a look at the tests in this library, which connects to an API using Guzzle.
It mocks responses from the API; there's a good deal of information in the docs on how the testing works, and it might give you an idea of how to go about it.
Basically, you make a manual call to the API with any parameters you need and save the response as a JSON file.
When you write your test for the API call, send along the same parameters and have it load the mock rather than using the live API; you can then check that the mock you created contains the expected values.
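With Guzzle 6+, replaying a saved response could look like this (a sketch; the fixture path and request details are illustrative, and the library above may use an older Guzzle version):
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

// Load the response captured earlier from a manual API call.
$savedBody = file_get_contents(__DIR__ . '/fixtures/sub_account.json');

// Queue it so the client returns it instead of hitting the live API.
$mock = new MockHandler([
    new Response(200, ['Content-Type' => 'application/json'], $savedBody),
]);
$client = new Client(['handler' => HandlerStack::create($mock)]);

// Any request now yields the canned response.
$response = $client->post('/subaccounts', ['json' => ['name' => 'test']]);
assert(json_decode((string) $response->getBody(), true) !== null);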
My updated version of the API in question can be found here:
Updated Repo
One way to test external APIs is, as you mentioned, to create a mock and work against that, with the behaviour hard-coded as you have understood it.
Sometimes people refer to this type of testing as "contract-based" testing: you write tests against the API based on the behaviour you have observed and coded against, and when those tests start failing, the "contract is broken". If they are simple REST-based tests using dummy data, you can also provide them to the external provider to run, so they can discover where and when they might be changing the API enough that it should be a new version or produce a warning about not being backwards compatible.
Ref: https://www.thoughtworks.com/radar/techniques/consumer-driven-contract-testing