To launch a test against a fake application backed by an in-memory database, Play Framework 2 promotes this approach:
FakeApplication(additionalConfiguration = inMemoryDatabase())
Is it necessary to specify additionalConfiguration = inMemoryDatabase() if application.conf already declares an in-memory database (H2) dedicated to tests?
I guess this additional configuration forces a clean in-memory database to be redeclared for each fake application, rather than sharing the same one across the whole test suite. That would give each test full isolation and spare us from defining setUp() and tearDown() methods to manage it.
What is it useful for?
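For context, a typical test using this pattern could look like the following (a minimal sketch using Play's Java test helpers; class, method name and test body are illustrative):

import static play.test.Helpers.*;
import org.junit.Test;

public class ModelTest {
    @Test
    public void worksAgainstAFreshDatabase() {
        // inMemoryDatabase() points db.default at a fresh in-memory H2 database,
        // so each fake application starts from a clean state without any
        // setUp()/tearDown() bookkeeping between tests.
        running(fakeApplication(inMemoryDatabase()), () -> {
            // ... exercise your models here ...
        });
    }
}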
Related
I want to delete every entry created by the tests so that they can be run again (both on the CI build and manually) without having to manually delete the database entries left by the previous run. I have found tearDown() and tearDownAfterClass(), but they seem to be useful only for connecting/disconnecting the database link. Can I use them to delete the entries made by the tests?
In setUp(), create an online snapshot of the database; in tearDown(), reset it to the snapshot point.
Whatever your tests have done to the data or the structure of the database, it is not part of the snapshot taken before the test and is therefore gone.
Consult your DBA about the options for the database system in use, and consider bringing in a system administrator as well: it may be that this is most easily done with a snapshot feature of the file-system, and you would then want to integrate all of this with your test suite.
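To make the idea concrete, here is a minimal sketch using H2's SCRIPT/RUNSCRIPT commands with JUnit 4 (the JDBC URL and file name are illustrative); the right mechanism for your database system may be entirely different, which is exactly why the DBA conversation matters:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.junit.After;
import org.junit.Before;

public class SnapshotIsolationTest {
    private Connection connection;

    @Before
    public void setUp() throws SQLException {
        // DB_CLOSE_DELAY=-1 keeps the in-memory database alive between connections.
        connection = DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
        // The "online snapshot": dump schema and data to a script file.
        connection.createStatement().execute("SCRIPT TO 'snapshot.sql'");
    }

    @After
    public void tearDown() throws SQLException {
        // Drop whatever the test created or altered, then replay the snapshot.
        connection.createStatement().execute("DROP ALL OBJECTS");
        connection.createStatement().execute("RUNSCRIPT FROM 'snapshot.sql'");
        connection.close();
    }
}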
Yes, you can do it via the methods you mentioned.
The tearDown() method runs after every single test inside the class.
The tearDownAfterClass() method runs once after all tests inside the class have completed.
I would suggest you use a rollback pattern. In the setUp() method, start a transaction:
protected function setUp(): void
{
    // ... some boilerplate to establish the connection
    $this->connection->beginTransaction();
}
In the tearDown() method, roll back the changes made by the tests:
protected function tearDown(): void
{
    $this->connection->rollback();
}
I'm writing a Gradle plugin that interacts with an external HTTP API. This interaction is handled by a single class (let's call it ApiClient). I'm writing some high-level tests that use Gradle TestKit to simulate an entire build that uses the plugin, but I obviously don't want them to actually hit the API. Instead, I'd like to mock ApiClient and check that its methods have been called with the appropriate arguments, but I'm not sure how to actually inject the mocked version into the plugin. The plugin is instantiated somewhere deep within Gradle, and gets applied to the project being executed using its void apply(Project project) method, so there doesn't appear to be a way to inject a MockApiClient object.
Perhaps one way is to manually instantiate a Project, apply() the plugin to it (at which point, I can inject the mocked object because I have control over plugin instantiation), and then programmatically execute a task on the project, but how can I do that? I've read the Gradle API documentation and haven't seen an obvious way.
A worst-case solution would be to pass in a debug flag through the plugin extension configuration, which the plugin would then use to determine whether it should use the real ApiClient or a mock (which would print some easily grep-able messages to STDOUT). This isn't ideal, though, since it's fuzzier than checking the arguments actually passed to the ApiClient methods.
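(For reference, the "manually instantiate a Project" idea would start like this with Gradle's ProjectBuilder test fixture; the plugin id and task name are hypothetical. The catch is that ProjectBuilder lets you apply plugins and inspect the resulting model, but it does not support actually executing tasks, which is where this route stalls.)

import org.gradle.api.Project;
import org.gradle.testfixtures.ProjectBuilder;
import org.junit.Test;

public class MyPluginTest {
    @Test
    public void appliesPlugin() {
        Project project = ProjectBuilder.builder().build();
        // Apply the plugin under test (hypothetical id).
        project.getPluginManager().apply("com.example.my-plugin");
        // The task and its configuration can be inspected here,
        // but ProjectBuilder offers no way to actually run it.
        project.getTasks().getByName("callApi");
    }
}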
Perhaps you could split your plugin into a few different plugins:
my-plugin-common - All the common stuff
my-plugin-real-services - Adds the "real" services to the model (eg RealApiClient)
my-plugin-mock-services - Adds "mock" services to the model (eg MockApiClient)
my-plugin - Applies my-plugin-real-services and my-plugin-common
my-plugin-mock - Applies my-plugin-mock-services and my-plugin-common
In the real world, people will only ever apply: 'my-plugin'
For testing you could apply: 'my-plugin-mock'
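A sketch of what the aggregating plugin might look like (the class name is illustrative; the mock variant would apply "my-plugin-mock-services" instead):

import org.gradle.api.Plugin;
import org.gradle.api.Project;

public class MyPlugin implements Plugin<Project> {
    @Override
    public void apply(Project project) {
        // Compose the plugin out of the two smaller ones.
        project.getPluginManager().apply("my-plugin-common");
        project.getPluginManager().apply("my-plugin-real-services");
    }
}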
Let's say that we have a class Controller that depends on a class Service, and the Service class depends on a class Repository. Only Repository communicates with an external system (say, a DB), and I know it should be mocked when unit tests are executed.
My question: for unit tests, should I mock the Service class when the Controller class is tested, even though the Service class doesn't depend on any external systems directly? And why?
It depends on the kind of test you are writing: integration test or unit test.
I assume you want to write a unit test in this case. The purpose of a unit test is to test the business logic of your class in isolation, so every other dependency should be mocked.
In this case you would mock the Service class. Doing so also allows you to prepare certain test scenarios based on the input you pass to a given method of Service. Imagine you have a Person findPerson(Long personID) method in your Service. When testing your Controller you are not interested in doing everything necessary for Service to actually return the right output. For one test scenario of your Controller you just want it to return a Person, whereas for a different scenario you don't want it to return anything. Mocking makes this very easy.
Also note that if you mock your Service you don't have to mock Repository since your Service is already a mock.
TL;DR: When writing a unit test for a certain class, just mock every other dependency so you can manipulate the output of the method invocations made on these dependencies.
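As a sketch (using Mockito, and assuming constructor injection; the Person constructor is illustrative), the two scenarios could look like this:

import static org.mockito.Mockito.*;
import org.junit.Test;

public class ControllerTest {
    @Test
    public void returnsPersonWhenServiceFindsOne() {
        Service service = mock(Service.class);
        when(service.findPerson(1L)).thenReturn(new Person("Alice"));
        Controller controller = new Controller(service);
        // ... call the controller and assert on its output ...
    }

    @Test
    public void handlesMissingPerson() {
        Service service = mock(Service.class);
        when(service.findPerson(2L)).thenReturn(null);
        Controller controller = new Controller(service);
        // ... assert the controller's "not found" behavior ...
        // Note: no Repository mock anywhere; the mocked Service never reaches it.
    }
}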
Yes, mock Services when testing controllers. Unit tests help to identify the location of a regression. A test for the Service code should only fail if the service code has changed, not if the controller code has changed. That way, when the service test fails, you know for sure that the root cause lies in a change to Service.
Also, usually it is much easier to mock the service than to mock all repositories invoked by the service just to test the controller. So it makes your tests easier to maintain.
But in general, you may keep certain util classes unmocked, as you lose more than you gain by mocking them. Also see:
https://softwareengineering.stackexchange.com/questions/148049/how-to-deal-with-static-utility-classes-when-designing-for-testability
As with all engineering questions, the answer for TDD is the same: "it depends". There are always trade-offs.
In the case of TDD, you develop the test from behavioral expectations first. In my experience, a behavioral expectation is a unit.
An example: say you want to get all users whose last name starts with 'A' and who are active in the system. So you would write a test that drives out a controller action to get those users, public ActionResult GetAllActiveUsersThatStartWithA().
In the end, I might have something like this:
public ActionResult GetAllActiveUsersThatStartWithA()
{
    var users = _repository.GetAllUsers();
    var activeUsersThatStartWithA = users.Where(u => u.IsActive && u.Name.StartsWith('A'));
    return View(activeUsersThatStartWithA);
}
This to me is a unit. I can now refactor (change my implementation without changing behavior) by adding a service class with the method below:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    var users = _repository.GetAllUsers();
    return users.Where(u => u.IsActive && u.Name.StartsWith(startsWith));
}
And my new implementation of the controller becomes
public ActionResult GetAllActiveUsersThatStartWithA()
{
    return View(_service.GetActiveUsersThatStartWithLetter('A'));
}
This is obviously a very contrived example, but it gives an idea of my point. The main benefit of doing it this way is that my tests aren't tied to any implementation details except the repository. Whereas, if I mocked out the service in my tests, I would now be tied to that implementation. If for whatever reason that service layer is removed, all my tests break. I would say the service layer is more likely to change than the repository layer.
The other thing to think about is that if I do mock out the service in my controller tests, I could run into a scenario where all my tests pass, but the only way I find out the system is broken is through an integration test (meaning a test where out-of-process or cross-assembly components interact with each other), or through production issues.
If, for instance, I change the implementation of the service class to the following:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    throw new Exception();
}
Again, this is a very contrived example, but the point is still relevant. I would not catch this with my controller tests, so it looks like the system is behaving properly based on my passing "unit tests", when in reality the system is not working at all.
The downside of my approach is that the tests can become very cumbersome to set up. So the trade-off is balancing test complexity against abstraction/mockable implementations.
The key thing to keep in mind is that TDD gives the benefit of catching regressions, but its main benefit is to help design a system. In other words, don't let the design dictate the tests you write. Let the tests dictate the functionality of the system first, then worry about the design through refactoring.
I want to write some unit tests and have to create or mock objects which are used in my program. I want to know if there are any tools that can capture and record the objects of my program during a real run and use them for unit testing (similar to the Selenium recorder in end-to-end testing)?
I've never heard of such a tool. You will have to write your own.
You could rely on the persistence layer of your application:
to store objects at some specific point of the application execution,
to load these objects again in your tests.
You might be interested in tools like databene-benerator, which is described as:
a framework for generating realistic and valid high-volume test data for your system under test
I had to refactor a large (production) codebase without any tests and unfortunately found no such tool, so I had to develop one myself. The main objectives were extensibility (the ability to plug in at every analysis stage) and quality (high code coverage by unit tests).
So anybody in search of such a tool might try out testrecorder. There are several ways to generate tests from real usage. The easiest is probably using a callsite recorder:
try (CallsiteRecorder recorder = new CallsiteRecorder(ExampleObject.class.getDeclaredMethod("inc"))) {
    printTest(recorder.record(() -> {
        int doubleInc = exampleObject.inc().inc().get();
        System.out.println(doubleInc);
    }));
    printTest(recorder.record(() -> {
        int resetInc = exampleObject.reset().inc().get();
        System.out.println(resetInc);
    }));
}
I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both tests pass when they are run separately.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (the user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug it. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't reveal much about the object instances.
I know this is a kind of guess-what question. It's hard to provide reliable data here, because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less as described in Testing Zend Framework MVC Applications (phly, boy, phly) or Testing Zend Framework Controllers (Federico Cargnelutti).
Solution:
I've found the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. The cache only took effect when the tests ran as a suite. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I was stuck on this for a day and hoped someone else had experienced this kind of Heisenbug with unit tests in general.)
You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
     ->resetResponse()
     ->dispatch('/news/index/index');