Grails unit test fails on @PostConstruct

The service has a @PostConstruct method to ensure that its dependencies have been set up. The dependencies are set in resources.groovy. The unit test fails on the @PostConstruct asserts. I tried setting up the dependencies manually in setUpSpec, to no avail. Even without a @TestFor, ServiceUnitTestMixin kicks in and merrily chokes on @PostConstruct.
I opened defect GRAILS-11878, which was promptly closed with advice to use @FreshRuntime and doWithSpring. Had they actually bothered to try it, they would have gotten the following error:
org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'grails.spring.BeanBuilder$ConfigurableRuntimeBeanReference$WrappedPropertyValue@2cce10bc' with class 'grails.spring.BeanBuilder$ConfigurableRuntimeBeanReference$WrappedPropertyValue' to class 'java.util.Collection'
Service under test:
@Transactional
class MovieRipIndexService {

    Collection<String> genres
    Collection<String> includes

    @PostConstruct
    void postConstruct() {
        notEmpty(genres as Collection, 'Genres must not be null or empty.')
        notEmpty(includes as Collection, 'Includes must not be null or empty.')
    }
}
Test:
@FreshRuntime
@TestFor(MovieRipIndexService)
class MovieRipIndexServiceSpec extends Specification {

    def doWithSpring = {
        serviceHelper(ServiceHelper)
        service.genres = serviceHelper.genres
        service.includes = serviceHelper.includes
    }
}

Spring support in unit tests is rather minimal, and the ApplicationContext that's active doesn't really go through any of the lifecycle phases that it would in a running app, or even during initialization of integration tests. You get a lot of functionality mixed into your class when using @TestFor and/or @Mock, but it's almost entirely faked out so you can focus on unit testing the class under test.
I tried implementing org.springframework.beans.factory.InitializingBean just now and that worked, so you might get further with that. @Transactional will also be ignored - the "database" is a ConcurrentHashMap, so you wouldn't get far with that anyway.
If you need real Spring behavior, use integration tests. Unit tests are fast and convenient but only useful for a fairly limited number of scenarios.

Related

Should I unit test classes which extend Sonata BaseEntityManager class?

Here is part of the code which extends BaseEntityManager:
namespace Vop\PolicyBundle\Entity;

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\Common\Persistence\ObjectRepository;
use Sonata\CoreBundle\Model\BaseEntityManager;

class AdditionalInsuredTypeManager extends BaseEntityManager
{
    /**
     * @param int $productId
     *
     * @return ArrayCollection
     */
    public function getProductInsuredTypes($productId = null)
    {
        $repository = $this->getRepository();
        $allActiveTypes = $repository->findAllActive();

        // other code
    }

    /**
     * @return AdditionalInsuredTypeRepository|ObjectRepository
     */
    protected function getRepository()
    {
        return parent::getRepository();
    }
}
And here I am trying to write a unit test:
public function testGetProductInsuredTypes()
{
    $managerRegistry = $this->getMockBuilder(\Doctrine\Common\Persistence\ManagerRegistry::class)
        ->getMock();

    $additionalInsuredTypeManager = new AdditionalInsuredTypeManager(
        AdditionalInsuredTypeManager::class,
        $managerRegistry
    );

    $additionalInsuredTypeManager->getProductInsuredTypes(null);
}
What are the problems:
I am mocking ManagerRegistry, but I have learned that I should not mock what I do not own. It is, however, a required constructor parameter.
I am getting this error:
Unable to find the mapping information for the class Vop\PolicyBundle\Entity\AdditionalInsuredTypeManager. Please check the 'auto_mapping' option (http://symfony.com/doc/current/reference/configuration/doctrine.html#configuration-overview) or add the bundle to the 'mappings' section in the doctrine configuration.
/home/darius/PhpstormProjects/vop/vendor/sonata-project/core-bundle/Model/BaseManager.php:54
/home/darius/PhpstormProjects/vop/vendor/sonata-project/core-bundle/Model/BaseManager.php:153
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Entity/AdditionalInsuredTypeManager.php:46
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Entity/AdditionalInsuredTypeManager.php:21
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Tests/Unit/Entity/AdditionalInsuredTypeManagerTest.php:22
I do not know how to fix this error, but I assume it has something to do with extending BaseEntityManager.
I see the error is caused by this line:
$repository = $this->getRepository();
I cannot even inject the repository through the constructor, because the parent constructor has no such parameter.
There is very little information about testing:
https://sonata-project.org/bundles/core/master/doc/reference/testing.html
I cannot tell you whether it's useful to test your repositories, nor can I give you pointers for your error, other than that you should most likely not extend Doctrine's entity manager. If anything, use a custom EntityRepository, or write a service in which you inject the EntityManager (or, better, the ManagerRegistry):
class MyEntityManager
{
    private $entityManager;

    public function __construct(EntityManager $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    public function getProductInsuredTypes($productId = null)
    {
        $repository = $this->entityManager->getRepository(Product::class);
        $allActiveTypes = $repository->findAllActive();

        // other code
    }
}
What I can give you is an explanation of how I approach testing repositories:
I think unit tests, especially with mocks, are a bit wasteful here. They only tie you to the current implementation, and since anything of relevance is mocked out, you will most likely not test any actual behavior.
What might be useful is doing functional tests where you provide a real database connection and then perform a query against the repository to see if it really returns the expected results. I usually do this for more complex queries or when using advanced Doctrine features like native queries with a custom ResultSetMapping. In this case you would not mock the EntityManager; instead you would use Doctrine's TestHelper to create an in-memory SQLite database. This can look something like this:
protected $entityManager;

protected function setUp()
{
    parent::setUp();

    $config = DoctrineTestHelper::createTestConfiguration();
    $config->setNamingStrategy(new UnderscoreNamingStrategy());
    $config->setRepositoryFactory(new RepositoryFactory());

    $this->entityManager = DoctrineTestHelper::createTestEntityManager($config);
}
The downside is that you will have to register custom types and listeners manually, which means the behavior might differ from your production configuration. Additionally, you are still tasked with setting up the schema and providing fixtures for your tests. You will also most likely not use SQLite in production, so this is another deviation. The benefit is that you will not have to deal with clearing your database between runs and can easily parallelize test runs; plus, it's usually faster and easier than setting up a complete test database.
The final option is somewhat close to the previous one. You can have a test database, defined in your parameters, environment variables or a config_test.yml; you then bootstrap the kernel and fetch the entity manager from your DI container. Symfony's WebTestCase can be used as a reference for this approach.
The downside is that you have to boot your kernel and make sure to have separate database setups for development, production and testing, to ensure your test data does not mess anything up. You will also have to set up your schema and fixtures, and additionally you can easily run into issues where tests are not isolated and start being flaky, e.g. when run in a different order or when running only parts of your test suite. Obviously, since this is a full integration test through your bootstrapped application, the performance footprint compared to a unit test or a smaller functional test is noticeably higher, and application caches might give you additional headaches.
As a rule of thumb:
I trust Doctrine when doing the most basic kind of queries and therefore do not test repositories for simple find-methods.
When I write critical queries I prefer testing them indirectly on higher layers, e.g. ensure in an acceptance test that the page displays the information.
When I encounter issues with repositories or need tests on lower layers I start with functional tests and then move on to integration tests for repositories.
tl;dr
Unit tests for repositories are mostly pointless (in my opinion)
Functional tests are good for testing simple queries in isolation with a reasonable amount of effort, should you need them.
Integration tests ensure the most production-like behavior, but are more painful to set up and maintain

Testing afterInsert in Grails

Simply, I have the following scenario:
class OwnedRights {

    static belongsTo = [comp: Competition]

    @Transactional
    def afterInsert() {
        // Erroring out here.
        Event.findAllByComp(comp).each { event ->
            // Do something with event.
        }
    }
}
When I attempt to save in my unit test with something like the following:
def ownedRights = new OwnedRights(params).save(flush: true, failOnError: true)
I am seeing the following stacktrace:
java.lang.NullPointerException
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:130)
at org.codehaus.groovy.grails.orm.support.GrailsTransactionTemplate.execute(GrailsTransactionTemplate.groovy:85)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:209)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:194)
This led me to this Jira issue, which indicated that I was not using the @Mock annotation; however, in my test I am mocking all used domain classes:
@Mock([OwnedRights, Sport, Competition, Event])
Is this a known problem with Hibernate events containing GORM logic?
Attempts to Solve
I have attempted to override the afterInsert() and beforeDelete() methods using the metaClass:
OwnedRights.metaClass.afterInsert = null;
OwnedRights.metaClass.beforeDelete = null;
and
OwnedRights.metaClass.afterInsert = {};
OwnedRights.metaClass.beforeDelete = {};
Both of which had no effect on the outcome. If I comment out the afterInsert event, the save works perfectly.
You are mentioning @Mock. This can only be used in unit tests, but then you should never try to test transactions in unit tests.
That's a no-no. Test transactions in an integration test. The work they did in allowing you to have an in-memory GORM implementation is wonderful, but transactions are a database feature. You just can't expect an in-memory implementation (backed by a Map!!) to behave like a database. What you are testing is the behavior of the GORM implementation for unit tests, not your actual database. So the test is useless in the real world.
I used this annotation (in the unit test file) to solve the problem:
@TestFor(FooBarService)
And yes, it's possible to test the service layer using unit tests (but some GORM methods may not be available, depending on your Grails version).

How to use "Pex and Moles" library with Entity Framework?

This is a tough one, because not too many people use Pex & Moles - or so I think (even though Pex is a really great product, much better than any other unit testing tool).
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business-layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList(); // List<T>, so ConvertAll is available
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that tests that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();

        // ToList() added so the query result can be assigned to an IList
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update attribute [PexMethod(MaxStack = 200)]
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running? (I'm not sure it is now.) I'm running just Pex & Moles - no VS test or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock out those parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
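As a minimal sketch of that idea (the IItemSource interface and the in-memory fake below are my own illustration, not part of the original code), the queryable data can sit behind a small seam:
using System.Collections.Generic;
using System.Linq;

// Hypothetical seam: GetItems() would ask this interface for its data
// instead of newing up an EFContext itself.
public interface IItemSource
{
    IQueryable<DBItem> Items { get; }
}

// Test double backed by a plain list - no database, no connection string.
public class InMemoryItemSource : IItemSource
{
    private readonly List<DBItem> _items;

    public InMemoryItemSource(List<DBItem> items)
    {
        _items = items;
    }

    public IQueryable<DBItem> Items
    {
        get { return _items.AsQueryable(); }
    }
}
The Where(i => i.Changed > limit) filter then runs against the in-memory list, so the test exercises the repository's filtering and conversion logic without touching EF at all.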
I am just getting to grips with Pex also ... my issues surrounded me wanting to use it with Moq ;)
Anyway ...
I have some methods similar to yours that have the same problem. When I increased the max, they went away. Presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters ... i.e. don't instantiate the repo in the method, but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use these to test my data access code against.
I use Pex on my business logic and objects.
If I were to try and test my DAL code, I'd have to use IoC to pass the data context into the methods - which would then make testing possible, as you can mock the data context.
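To sketch that refactoring (an assumption about how DBRepository could look, reusing the hypothetical IItemSource from the answer above):
using System;
using System.Collections.Generic;
using System.Linq;

public class DBRepository
{
    private readonly IItemSource _source;

    // The data source is injected rather than created inside the method,
    // so a test can pass a fake while production passes an EF-backed one.
    public DBRepository(IItemSource source)
    {
        _source = source;
    }

    public IList<BLItem> GetItems()
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        return _source.Items
                      .Where(i => i.Changed > limit)
                      .ToList()
                      .ConvertAll(i => i.ToBusinessObject());
    }
}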
You should use the Entity Framework Repository Pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx

MEF and unit testing with NUnit

A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and also shared libraries. Overall, it's been great aside from the frequent mistakes on my part, which result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy([Import("name_param")] string name) { }

    ...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, the CandyInterface requires a connection to a database, which I have worked around by just adding a test database to my unit test folder, and I have NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so definitely something you should learn.
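For example, with a dynamic mock library such as Moq (my choice for illustration; GetName(), CandyConsumer and Describe() are stand-ins, not members from the question):
using Moq;
using NUnit.Framework;

[TestFixture]
public class CandyConsumerTests
{
    [Test]
    public void Consumer_Uses_The_Candy_It_Is_Given()
    {
        // Dynamic mock of the MEF-imported dependency - no container, no database.
        var candy = new Mock<ICandyInterface>();
        candy.Setup(c => c.GetName()).Returns("Godiva");

        // Hand the test double straight to the hypothetical class under test.
        var consumer = new CandyConsumer(candy.Object);

        StringAssert.Contains("Godiva", consumer.Describe());
    }
}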
I agree that creating the DOCs (depended-on components) manually is much better than using the MEF composition container to satisfy imports, but regarding the note 'composing fixtures in setup leads to the general fixture anti-pattern' - I want to mention that that's not always the case.
If you're using the static container and satisfying imports via CompositionInitializer.SatisfyImports, you will have to face the general fixture anti-pattern, as CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and get away without facing the shared/general fixture anti-pattern.
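A minimal sketch of that per-test container, assuming MyCandy's assembly holds the exports (the "name_param" value supplied here mirrors the [ImportingConstructor] from the question):
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using NUnit.Framework;

[TestFixture]
public class PerTestContainerTests
{
    [Import]
    public ICandyInterface Candy { get; set; }

    [Test]
    public void Imports_Are_Satisfied_By_A_Fresh_Container()
    {
        // A brand-new container per test: no shared/general fixture.
        var catalog = new AssemblyCatalog(typeof(MyCandy).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            // Supply the string import MyCandy's constructor asks for.
            container.ComposeExportedValue("name_param", "Godiva");

            // Wires up this instance's [Import]s without registering it
            // as a shared part in the container.
            container.SatisfyImportsOnce(this);

            Assert.IsNotNull(Candy);
        }
    }
}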
I blogged on how to do unit tests (not NUnit, but it works just the same) with MEF.
The trick was to use a MockExportProvider, and I created a test base for all my tests to inherit from.
This is my main AutoWire function that works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;

    var assCatalogs = new List<AssemblyCatalog>();
    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }

    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); // need to use the mocks provider before the assembly ones

        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }

        container = new CompositionContainer(providers.ToArray());

        // must set the source provider for CatalogExportProvider back to the container
        // (kinda stupid but apparently no way around this)
        foreach (var p in providers)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }

    container.ComposeParts(this);
}
More info on my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first, before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    // assert that the data was retrieved successfully
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    // assert that the data was retrieved successfully
}
In other words, I want to ensure that the data can be added in the first place before I test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test, when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory attribute (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet()
{
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
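In NUnit that looks something like the following sketch (Database and SomeData are placeholders for whatever your fixture actually talks to):
using NUnit.Framework;

[TestFixture]
public class DatabaseGetTests
{
    private Database db;       // placeholder DB wrapper
    private SomeData someData; // placeholder record type

    [SetUp]
    public void SeedData()
    {
        // Arrange the state Get() depends on here, instead of
        // relying on a separate AddTest having run first.
        db = new Database();
        someData = new SomeData();
        db.Add(someData);
    }

    [Test]
    public void GetReturnsWhatSetUpInserted()
    {
        var result = db.Get(someData);
        Assert.IsNotNull(result);
    }
}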
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of unit testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about whether 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
In any case, a test class designed the way you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want:
"If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon."
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
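Sketched out, with a hypothetical CanConnect() probe standing in for whatever 'Adds work' check fits your setup:
using NUnit.Framework;

[TestFixture]
public class GetTests
{
    private Database db; // placeholder DB wrapper

    [TestFixtureSetUp] // [OneTimeSetUp] in newer NUnit versions
    public void CheckPrerequisites()
    {
        db = new Database();
        if (!db.CanConnect()) // hypothetical connectivity check
        {
            // Skips every test in this fixture rather than failing them all.
            Assert.Ignore("Database unavailable - skipping GetTests.");
        }
    }

    // ... the Get() tests go here ...
}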
Create a flag, set it when the Add test fails (in a catch-all around the old test code), and return early from the Get test if it is set:
public boolean addFailed = false;

public void testAdd() {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}