Service @Transactional exception translation - web-services

I have a web service with an operation that looks like
public Result checkout(String id) throws LockException;
implemented as:
@Transactional
public Result checkout(String id) throws LockException {
    someDao.acquireLock(id); // ConstraintViolationException might be thrown on commit
    Data data = otherDao.find(id);
    return convert(data);
}
My problem is that locking can only fail on transaction commit, which occurs outside of my service method, so I have no opportunity to translate the ConstraintViolationException into my custom LockException.
Option 1
One option that's been suggested is to make the service delegate to another method that's @Transactional. E.g.
public Result checkout(String id) throws LockException {
    try {
        return someInternalService.checkout(id);
    }
    catch (ConstraintViolationException ex) {
        throw new LockException();
    }
}
...
public class SomeInternalService {
    @Transactional
    public Result checkout(String id) {
        someDao.acquireLock(id);
        Data data = otherDao.find(id);
        return convert(data);
    }
}
My issues with this are:
There is no reasonable name for the internal service that isn't already in use by the external service since they are essentially doing the same thing. This seems like an indicator of bad design.
If I want to reuse someInternalService.checkout in another place, the contract for that is wrong because whatever uses it can get a ConstraintViolationException.
Option 2
I thought of maybe using AOP to put advice around the service that translates the exception. This seems wrong to me, though, because checkout needs to declare that it throws LockException for clients to use it, but the actual service will never throw this; it will instead be thrown by the advice. There's nothing to prevent someone in the future from removing throws LockException from the interface because it appears to be incorrect.
Also, this way is harder to test. I can't write a JUnit test that verifies the exception is thrown without creating a Spring context and using AOP during the tests.
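For reference, the advice in question would be something like this (a sketch only; it assumes the aspect is ordered outside the transaction advice so that it sees commit-time exceptions, and the pointcut and class names are made up):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class LockExceptionTranslationAspect {

    // Hypothetical pointcut; match whatever your service interface actually is.
    @Around("execution(* com.example.CheckoutService.checkout(..))")
    public Object translateLockFailure(ProceedingJoinPoint pjp) throws Throwable {
        try {
            return pjp.proceed();
        } catch (ConstraintViolationException ex) { // whichever ConstraintViolationException your stack throws
            throw new LockException();
        }
    }
}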
Option 3
Use manual transaction management in checkout? I don't really like this because everything else in the application is using the declarative style.
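E.g. it would look roughly like this with Spring's TransactionTemplate (a sketch; it assumes the template is built from the same PlatformTransactionManager that backs @Transactional, so the commit happens inside execute() and the failure can be caught right here):
public Result checkout(String id) throws LockException {
    try {
        return transactionTemplate.execute(status -> {
            someDao.acquireLock(id);
            Data data = otherDao.find(id);
            return convert(data);
        });
    } catch (ConstraintViolationException ex) {
        // the commit runs inside execute(), so the exception surfaces here
        throw new LockException();
    }
}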
Does anyone know the correct way to handle this situation?

There's no one correct way.
A couple more options for you:
Make the DAO transactional. That's not great, but it can work.
Create a wrapping service (a Facade) whose job is to do the exception handling/wrapping around the transactional services you've mentioned. This is a clear separation of concerns, and the facade can share method names with the real lower-level service, as in the sketch below.
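A rough sketch of that facade (assuming the SomeInternalService from Option 1; the interface name is made up):
public class CheckoutFacade implements CheckoutService { // hypothetical public interface

    private final SomeInternalService delegate;

    public CheckoutFacade(SomeInternalService delegate) {
        this.delegate = delegate;
    }

    @Override
    public Result checkout(String id) throws LockException {
        try {
            // the @Transactional boundary (and thus the commit) is inside the delegate
            return delegate.checkout(id);
        } catch (ConstraintViolationException ex) {
            throw new LockException();
        }
    }
}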

Related

Kafka Streams - Unit test for ProductionExceptionHandler implementation

I am developing a Kafka Streams application that uses Spring Cloud Stream for configuration.
In order to handle "RecordTooLargeException" I have implemented a custom ProductionExceptionHandler as below.
public class CustomProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                                     Exception exception) {
        if (exception instanceof RecordTooLargeException) {
            return ProductionExceptionHandlerResponse.CONTINUE;
        }
        return ProductionExceptionHandlerResponse.FAIL;
    }

    @Override
    public void configure(Map<String, ?> configs) {
    }
}
I have added the following property in my application.yml
spring.kafka:
  streams:
    properties:
      default.production.exception.handler: "com.fd.acquisition.product.availability.product.exception.StreamsRecordProducerErrorHandler"
The code works fine as the default failing behavior is overridden and the processing is continued.
I am trying to write unit test cases to simulate this behaviour.
I am making use of TopologyTestDriver and it has the following configuration.
Properties streamsConfiguration = new Properties();
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "TopologyTestDriver");
streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
streamsConfiguration.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "200");
streamsConfiguration.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, (10 * 1024));
streamsConfiguration.put(StreamsConfig.SEND_BUFFER_CONFIG, 10);
streamsConfiguration.put(StreamsConfig.RECEIVE_BUFFER_CONFIG, 10);
final String tempDirectory = Files.createTempDirectory("kafka-streams").toAbsolutePath().toString();
streamsConfiguration.setProperty(StreamsConfig.STATE_DIR_CONFIG, tempDirectory);
streamsConfiguration.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
        CustomProductionExceptionHandler.class);
Ideally, pushing a large enough record should trigger a RecordTooLargeException.
But the MockProducer class is used instead of KafkaProducer and hence the ensureValidRecordSize() method is never called.
Is there any way to override this behaviour or any other way to simulate the RecordTooLargeException ?
I think this is basically a feature missing from MockProducer. So the first option would be to contribute a PR back to Kafka that adds the missing check to MockProducer. It looks like it should be fairly easy to add. If you are not a developer, you can alternatively create an issue and hope somebody from the community will pick it up.
Either way, you will have to wait for the next Kafka release to get the feature. You have to decide whether you can live without the unit test until then. There could be ways to hack this (use reflection to make TopologyTestDriver.producer visible/non-final and replace it with an "improved" version of MockProducer), but it's probably going to be harder than just contributing to Kafka.
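For completeness, the reflection hack would look roughly like this. It is a fragile sketch: the producer field name and the exact send() semantics are internal details of TopologyTestDriver and MockProducer that differ between Kafka versions, so treat every name here as an assumption.
import java.lang.reflect.Field;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.TopologyTestDriver;

public class SizeCheckingProducerInstaller {

    // Swap the driver's internal MockProducer for one that mimics
    // KafkaProducer.ensureValidRecordSize() by failing oversized records.
    public static void install(TopologyTestDriver driver, int maxBytes) throws Exception {
        MockProducer<byte[], byte[]> sizeChecking =
                new MockProducer<byte[], byte[]>(true, new ByteArraySerializer(), new ByteArraySerializer()) {
                    @Override
                    public synchronized Future<RecordMetadata> send(ProducerRecord<byte[], byte[]> record,
                                                                    Callback callback) {
                        int size = (record.key() == null ? 0 : record.key().length)
                                + (record.value() == null ? 0 : record.value().length);
                        if (size > maxBytes && callback != null) {
                            // Streams reacts to the error in the callback, which is where
                            // the ProductionExceptionHandler gets consulted.
                            callback.onCompletion(null, new RecordTooLargeException());
                            return null; // assumption: the returned future is not inspected
                        }
                        return super.send(record, callback);
                    }
                };

        Field producerField = TopologyTestDriver.class.getDeclaredField("producer"); // version-dependent name
        producerField.setAccessible(true);
        producerField.set(driver, sizeChecking);
    }
}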

WebSphere does not commit JPA transaction

Could someone explain to me why WebSphere Application Server 8.5.5 does not commit (or even begin?) transactions in JTA mode?
I have a dao class annotated with
@Stateless
@TransactionManagement(value = TransactionManagementType.CONTAINER)
And I have a method annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW). The method simply inserts some entities into the database (if they do not exist yet).
for (MyEntity entity : entities) {
    if (validate(entity)) { // Programmatic bean validation, returns true when ok
        getEntityManager().persist(entity);
    }
}
Tests run with Arquillian in embedded GlassFish, and this works perfectly. I can stop at a breakpoint in Eclipse (Luna & Kepler) after this method completes and check in the db that the data is there. The data used in the test is identical to the data used when deployed on WAS. (Validation errors are shown correctly when tested separately.)
According to instructions (http://docs.oracle.com/javaee/6/tutorial/doc/bncij.html)
The code does not include statements that begin and end the transaction...
I probably don't understand this correctly, as I have to explicitly wrap the method contents with these:
getEntityManager().getTransaction().begin();
... The persist loop ...
getEntityManager().getTransaction().commit();
...to make the persisting work.
If I do not do this, there is nothing put in to the database.
I also injected an extra resource for checking the transaction status
@Resource
private TransactionSynchronizationRegistry tsr;
and put this at the end of the method
System.out.println("Transaction status: " + tsr.getTransactionStatus());
getEntityManager().flush();
The output was this:
Transaction status: 0
where 0 = Status.STATUS_ACTIVE
However, at the flush, an exception was thrown:
javax.persistence.TransactionRequiredException:
Exception Description: No transaction is currently active
I spent days trying to figure this out on WAS, while I had it working all the time with the embedded GlassFish (v3) tests.
Both use Java EE 6 (and Java 6), though for debugging in Eclipse I have to switch to Java EE 7 + Java 7.
Prior to this, in another project, I wrote similar code on GlassFish v4 without any kind of problems.
So could someone clarify whether there are some WAS-specific requirements to make this work, or do I just need to do the exact opposite with WAS of what the instructions say and how I understand things should work?
I have already the following configuration on WAS:
(admin console)
server > server types > WebSphere application servers > server1 > Container Services > Default Java Persistence API settings > Default JTA data source JNDI name = 'jdbc/kr' (the same as configured in my persistence.xml)
resources > JDBC > JDBC providers > Oracle JDBC Driver (pings ok)
(When this was created) the 'Implementation type' was set to 'connection Pool Datasource', but I also tried 'XA'.
// UPDATE
The getEntityManager method simply returns the injected entity manager from the superclass.
public abstract class GenericDAO<T extends GenericEntity> {

    @PersistenceContext
    private EntityManager em;

    ...

    public EntityManager getEntityManager() {
        return this.em;
    }
}
// GenericEntity is an interface to force the entities to have the "get all" named query.
The class uses the generic DAO pattern (you get the idea from Single DAO & generic CRUD methods (JPA/Hibernate + Spring), though I have my own modifications, as it's an abstract class with default CRUD methods).
When the getEntityManager method is used instead of accessing the resource directly, a concrete DAO class can override the entity manager used in the superclass if it decides to use its own. The superclass also calls getEntityManager, so if you override it in the implementing class, the abstract class gets the same em the actual implementing class uses. This method is also usable in tests, where you can get the em and evict data when needed.
This way you can also easily add logging when the em is accessed (a logging interceptor).
// UPDATE 2
It occurred to me that there is a separate resource manager used to get remote resources (EJBs). This is so that the location of the EJB is configurable from a property file. However, injection still works within this service's EJB.
I started to wonder: could this somehow cause the container to lose its transaction handling ability?
I also noted that there is a @Singleton-scoped bean along the path using the actual transactional resources. I could not find a clear explanation of what scope the beans should have (probably there is no requirement of any kind), but I ended up understanding that the DAO should be @Stateless.
In Java EE 7 this is much clearer, as there is the @Transactional annotation for expressing this.
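E.g. something like this (a sketch of the Java EE 7 style on a plain CDI bean; the names are made up):
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.Transactional;

@ApplicationScoped
public class MyEntityWriter {

    @PersistenceContext
    private EntityManager em;

    // Container-managed JTA transaction; no manual begin()/commit() anywhere
    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void persistAll(List<MyEntity> entities) {
        for (MyEntity entity : entities) {
            em.persist(entity);
        }
    }
}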

How are integration tests written for interacting with external API?

First up, where my knowledge is at:
Unit Tests are those which test a small piece of code (single methods, mostly).
Integration Tests are those which test the interaction between multiple areas of code (which hopefully already have their own Unit Tests). Sometimes, parts of the code under test requires other code to act in a particular way. This is where Mocks & Stubs come in. So, we mock/stub out a part of the code to perform very specifically. This allows our Integration Test to run predictably without side effects.
All tests should be able to be run stand-alone without data sharing. If data sharing is necessary, this is a sign the system isn't decoupled enough.
Next up, the situation I am facing:
When interacting with an external API (specifically, a RESTful API that will modify live data with a POST request), I understand we can (should?) mock out the interaction with that API (more eloquently stated in this answer) for an Integration Test. I also understand we can Unit Test the individual components of interacting with that API (constructing the request, parsing the result, throwing errors, etc). What I don't get is how to actually go about this.
So, finally: My question(s).
How do I test my interaction with an external API that has side effects?
A perfect example is Google's Content API for shopping. To be able to perform the task at hand, it requires a decent amount of prep work, then performing the actual request, then analysing the return value. Some of this is without any 'sandbox' environment.
The code to do this generally has quite a few layers of abstraction, something like:
<?php
class Request
{
    public function setUrl($url) { /* ... */ }
    public function setData($data) { /* ... */ }
    public function setHeaders($headers) { /* ... */ }

    public function execute() {
        // Do some CURL request or some-such
    }

    public function wasSuccessful() {
        // some test to see if the CURL request was successful
    }
}

abstract class GoogleAPIRequest
{
    private $request;

    abstract protected function getUrl();
    abstract protected function getData();

    public function __construct() {
        $this->request = new Request();
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }

    public function doRequest() {
        $this->request->execute();
    }

    public function wasSuccessful() {
        return ($this->request->wasSuccessful() && $this->parseResult());
    }

    private function parseResult() {
        // return false when result can't be parsed
    }

    protected function getHeaders() {
        // return some GoogleAPI specific headers
    }
}

class CreateSubAccountRequest extends GoogleAPIRequest
{
    private $dataObject;

    public function __construct($dataObject) {
        // set before calling the parent constructor, which calls getData()
        $this->dataObject = $dataObject;
        parent::__construct();
    }

    protected function getUrl() {
        return "http://...";
    }

    protected function getData() {
        return $this->dataObject->getSomeValue();
    }
}

class aTest extends PHPUnit_Framework_TestCase
{
    public function testTheRequest() {
        $dataObject = getSomeDataObject(/* ... */);
        $request = new CreateSubAccountRequest($dataObject);
        $request->doRequest();
        $this->assertTrue($request->wasSuccessful());
    }
}
?>
Note: This is a PHP5 / PHPUnit example
Given that testTheRequest is the method called by the test suite, the example will execute a live request.
Now, this live request will (hopefully, provided everything went well) do a POST request that has the side effect of altering live data.
Is this acceptable? What alternatives do I have? I can't see a way to mock out the Request object for the test. And even if I did, it would mean setting up results/entry points for every possible code path that Google's API accepts (which in this case would have to be found by trial and error), but it would allow me the use of fixtures.
A further extension is when certain requests rely on certain data being Live already. Using the Google Content API as an example again, to add a Data Feed to a Sub Account, the Sub Account must already exist.
One approach I can think of is the following steps:
In testCreateAccount
Create a sub-account
Assert the sub-account was created
Delete the sub-account
Have testCreateDataFeed depend on testCreateAccount not having any errors
In testCreateDataFeed, create a new account
Create the data feed
Assert the data feed was created
Delete the data feed
Delete the sub-account
This then raises the further question: how do I test the deletion of accounts / data feeds? testCreateDataFeed feels dirty to me. What if creating the data feed fails? The test fails, therefore the sub-account is never deleted... I can't test deletion without creation, so do I write another test (testDeleteAccount) that relies on testCreateAccount, before creating and then deleting an account of its own (since data shouldn't be shared between tests)?
In Summary
How do I test interacting with an external API that affects live data?
How can I mock / stub objects in an Integration test when they're hidden behind layers of abstraction?
What do I do when a test fails and the live data is left in an inconsistent state?
How in code do I actually go about doing all this?
Related:
How can mocking external services improve unit tests?
Writing unit tests for a REST-ful API
This is more an additional answer to the one already given:
Looking through your code, the class GoogleAPIRequest has a hard-coded dependency on the Request class. This prevents you from testing it independently of the Request class, so you can't mock the request.
You need to make the request injectable, so you can change it to a mock while testing. That done, no real API HTTP requests are sent, the live data is not changed, and you can test much more quickly.
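The seam looks roughly like this (sketched in Java to match the rest of this page; the PHP version is analogous, and all names here are hypothetical):
// The collaborator becomes an interface...
interface HttpRequest {
    void execute();
    boolean wasSuccessful();
}

// ...and the API request receives it instead of constructing it internally,
// so a test can hand in a stub that never touches the network.
class GoogleApiRequest {

    private final HttpRequest request;

    GoogleApiRequest(HttpRequest request) {
        this.request = request;
    }

    boolean doRequestAndCheck() {
        request.execute();
        return request.wasSuccessful();
    }
}

// A trivial hand-rolled stub for a test:
class AlwaysSuccessfulRequest implements HttpRequest {
    public void execute() { /* no network call */ }
    public boolean wasSuccessful() { return true; }
}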
I've recently had to update a library because the api it connects to was updated.
My knowledge isn't enough to explain in detail, but I learnt a great deal from looking at the code. https://github.com/gridiron-guru/FantasyDataAPI
You can submit a request as you would normally to the API and then save that response as a JSON file. You can then use that as a mock.
Have a look at the tests in this library, which connects to an API using Guzzle.
It mocks responses from the API. There's a good deal of information in the docs on how the testing works; it might give you an idea of how to go about it.
But basically, you do a manual call to the API along with any parameters you need, and save the response as a JSON file.
When you write your test for the API call, send along the same parameters and get it to load the mock rather than using the live API. You can then test that the mock you created contains the expected values.
My Updated version of the api in question can be found here.
Updated Repo
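In outline, such a test has this shape (Java here; the linked repo does the same with PHP and Guzzle, and the fixture path and the asserted field are made up):
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

class CannedResponseTest {

    @Test
    void parserHandlesRecordedResponse() throws Exception {
        // fixtures/players.json was saved once from a real, manual API call
        String body = Files.readString(Path.of("fixtures/players.json"));

        // assert the recorded payload still matches what the parsing code expects
        assertTrue(body.contains("\"PlayerID\""), "expected field missing from recorded response");
    }
}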
One of the ways to test out external APIs is as you mentioned, by creating a mock and working against that with the behavior hard coded as you have understood it.
Sometimes people refer to this type of testing as "contract based" testing: you write tests against the API based on the behavior you have observed and coded against, and when those tests start failing, the "contract is broken". If they are simple REST-based tests using dummy data, you can also provide them to the external provider to run, so they can discover where/when they might be changing the API enough that it should be a new version or produce a warning about not being backwards compatible.
Ref: https://www.thoughtworks.com/radar/techniques/consumer-driven-contract-testing

Where should I place this code in MVC?

My code works perfectly, BUT: what's the best practice in this case?
Here is the code that is important.
This is in the controller.
private IProductRepository repository;

[HttpPost]
public ActionResult Delete(int productId) {
    Product prod = repository.Products.FirstOrDefault(p => p.ProductID == productId);
    if (prod != null) {
        repository.DeleteProduct(prod);
        TempData["message"] = string.Format("{0} was deleted", prod.Name);
    }
    return RedirectToAction("Index");
}
This is the repository (both the interface and the implementation):
public interface IProductRepository {
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
    void DeleteProduct(Product product);
}
And here comes the repository (the part that is important). I want to point out, though, that this is not a fake class, as is pretty clear; the testing is done on fake classes.
private EFDbContext context = new EFDbContext();

public IQueryable<Product> Products {
    get { return context.Products; }
}

public void DeleteProduct(Product product) {
    context.Products.Remove(product);
    context.SaveChanges();
}
Well, first question:
When doing testing on this, I will make two test methods on the controller in "ControllerTest": "Can_delete_valid_product" and "Cannot_delete_invalid_product". Is there any point in having a test class for the repository, like "RepositoryTest"? After all, the controller tests whether the delete function works; no need to test it twice, right?
Second question:
In this I test in the controller whether the product exists before trying to delete it. If it exists, I call the delete function in the repository. This means that there should never be the possibility of an exception. BUT you could still create an exception in the repository if you send down null (which can't happen here, but it could if you forget to check for null). The question is whether the check that the product exists should be done in the repository instead?
I prefer to keep logic out of the controller for the most part. A test of the controller action verifies that the repository is called, but the repository itself is mocked in that test. I would make the repository responsible for handling the null checking.
Personally I create separate tests for my repositories/data access to ensure that it works properly. The controllers themselves would be tested with mocks.
Actually, it's entirely possible (just maybe not that likely) that someone could delete a product just as someone else is trying to delete it. In this case you probably don't care/need to know that someone did, though, so I would probably just swallow that exception in the repository (though I would log it first). In terms of null checking/defensive programming, that's entirely a personal choice. Some people leave checks like that to the entry points of the system, whereas others will build a layered defense that has additional checks throughout the code. The problem is that these checks can get quite ugly, which is a big part of why I wish Code Contracts would gain more traction.
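To make the mock-the-repository approach concrete, here is a sketch with Java, JUnit, and Mockito standing in for the C# stack above (all names are hypothetical):
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class ProductControllerTest {

    interface ProductRepository {
        Product findById(int id);
        void delete(Product product);
    }

    record Product(int id, String name) {}

    // Minimal stand-in for the controller: look up, then delete if present.
    static class ProductController {
        private final ProductRepository repository;
        ProductController(ProductRepository repository) { this.repository = repository; }

        void delete(int productId) {
            Product prod = repository.findById(productId);
            if (prod != null) {
                repository.delete(prod);
            }
        }
    }

    @Test
    void deletesExistingProduct() {
        ProductRepository repository = mock(ProductRepository.class);
        when(repository.findById(1)).thenReturn(new Product(1, "Kayak"));

        new ProductController(repository).delete(1);

        // the controller test only verifies the delegation; no real data involved
        verify(repository).delete(new Product(1, "Kayak"));
    }

    @Test
    void ignoresMissingProduct() {
        ProductRepository repository = mock(ProductRepository.class);
        when(repository.findById(2)).thenReturn(null);

        new ProductController(repository).delete(2);

        verify(repository, never()).delete(any());
    }
}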
This means that there should never be the possibility of an exception. BUT you could still create an exception in the repository if you send down null (which can't happen here, but it could if you forget to check for null).
Or if it's deleted after you check it exists but before you delete it. Or if you lose connection to the repository (or will the method never return in this case?). You can't avoid exceptions in this way.

mspec & rhino mocks expected exception testing

I'm fairly new to unit testing and can't work out how to test this case properly (or whether I even should).
I have a controller method (pseudo code):
public ActionResult Register(formModel model)
{
    if (ModelState.isValid) {
        try {
            _userService.CreateUser(a bunch of parameters here);
            return RedirectToAction(some other action);
        }
        catch (Exception e)
        {
            ModelState.AddModelError("", e.Message);
        }
    }
    return View();
}
I have a bunch of separate tests against _userService. The CreateUser method just creates a new user and returns nothing, OR throws an exception if there was an error (e.g. the user exists), which I bubble up to the controller, surround in a try/catch, and add to the ModelState.
From what I understand, I should mock the service and assert that it was called correctly (I use the AssertWasCalled syntax), since it returns nothing and I just want to know that my controller calls it.
What I'm not sure about is how to test that, when the user service throws an error, the controller does not redirect and adds that exception to the ModelState. With Rhino Mocks you can stub a mock, but the book The Art of Unit Testing advises against that.
Right now in my test I manually add a model error (not caring whether it's from the user service) and test that the controller returns the same view if there are errors. Is this the correct way of going about this? Or should I maybe create a separate test where I stub the _userService to throw an error and check that it gets added to the ModelState? Or should I not even test that case? I feel like I may be over-analyzing the whole thing, and testing using the ModelState would be enough to satisfy this...
Your mock represents a collaborating class. I wouldn't get too hung up on the difference between mocks and stubs; it's still a collaborating class.
You can think of your unit tests as describing how to use your class, and how the class then interacts with its collaborators. You have two examples:
Given a controller
When I register the model
Then the class should ask the user service to create a user.
And:
Given a controller
Given the user service is broken
When I register the model
Then the class should attach the error to the model state.
It's that second Given that tells you you're stubbing rather than mocking. You're setting the user service up as though it's broken. The context in which the class acts is different, so you need to stub, and you should indeed throw an exception.
If you put these lines as comments inside your test, it'll make sense. If it makes sense, ignore the book.
BTW, this is unit-level BDD. You can use "Given, When, Then" at a unit level just as at a scenario level, and it might help you think about the logic of your tests. Just don't use BDD scenario tools for this.
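A compact sketch of that second example, with Java and Mockito syntax standing in for the Rhino Mocks stub (all names are hypothetical):
import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class RegisterTest {

    interface UserService { void createUser(String name); }

    // Hypothetical stand-in for the controller: catches the service
    // exception and records it as a model error instead of redirecting.
    static class RegisterController {
        final List<String> modelErrors = new ArrayList<>();
        private final UserService userService;
        RegisterController(UserService userService) { this.userService = userService; }

        String register(String name) {
            try {
                userService.createUser(name);
                return "redirect";
            } catch (Exception e) {
                modelErrors.add(e.getMessage());
                return "view";
            }
        }
    }

    @Test
    void brokenServiceAddsModelError() {
        // "Given the user service is broken": stub it to throw
        UserService userService = mock(UserService.class);
        doThrow(new RuntimeException("User already exists"))
                .when(userService).createUser("bob");

        RegisterController controller = new RegisterController(userService);

        // "Then": no redirect, and the error lands in the model state
        assertEquals("view", controller.register("bob"));
        assertTrue(controller.modelErrors.contains("User already exists"));
    }
}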