Inconsistent unit tests - failing in test suite, passing in isolation - unit-testing

I have unit tests for Zend Framework controllers extending Zend_Test_PHPUnit_ControllerTestCase.
The tests are dispatching an action, which forwards to another action, like this:
// AdminControllerTest.php
public function testAdminAction()
{
    $this->dispatch('/admin/index/index');
    // forwards to login page
    $this->assertModule('user');
    $this->assertController('profile');
    $this->assertAction('login');
    $this->assertResponseCode(401);
}
// NewsControllerTest.php
public function testIndexAction()
{
    $this->dispatch('/news/index/index');
    $this->assertModule('news');
    $this->assertController('index');
    $this->assertAction('index');
    $this->assertResponseCode(200);
}
Both tests pass when they are run separately.
When I run them in the same test suite, the second one fails.
Instead of dispatching /news/index/index, the previous request is dispatched (the user module).
How can I trace this bug? It looks like I have some global state somewhere in the application, but I'm unable to debug it. How can I dump the objects between the tests in the suite? setUpBeforeClass/tearDownAfterClass are static, so they don't tell me much about the object instances.
I know this is a kind of guessing question. It's hard to provide reliable data here because it would take up too much space, so feel free to ask for details.
The whole unit test setup is more or less like described in: Testing Zend Framework MVC Applications - phly, boy, phly or Testing Zend Framework Controllers « Federico Cargnelutti.
Solution:
I've tracked down the issue (after a little nap). The problem was not in the unit test setup, but in the tested code.
I use different ACL objects based on the module name. Which one to use was determined by a static call to an action helper, which cached the result in a private static variable to speed things up. That cache only came into play when the tests ran in the same suite, because the static state persisted between tests. I just need more unit tests for this code :)
(I'm sorry for such a rubbish post, but I'd been stuck on this for a day and I hoped someone else had run into a similar kind of Heisenbug with unit tests in general.)

You may try clearing the request and response objects before dispatching each action, like this:
$this->resetRequest()
->resetResponse()
->dispatch('/news/index/index');

Related

Does django testing have a breakDown() similar to setUp() that is run after all tests complete?

I am running some tests in django but they depend on a response from an outside service. For instance,
I might create a customer and wish to acknowledge it has been created in the outside service.
Once I am done with testing I want to remove any test customers from the outside service.
Ideally, there would be a method similar to setUp() that runs after all tests have completed.
Does anything like this exist?
You can make use of either unittest.TestCase.tearDown or unittest.TestCase.tearDownClass.
tearDown(...) is the method that gets called immediately after each test method has been called and the result recorded.
tearDownClass(...), on the other hand, gets called after all tests in an individual class have run, i.e. once per test class.
IMO, using the tearDownClass(...) method is more appropriate, since you may not need to check/acknowledge the external service after each test case of the same class.
Django's testing framework uses a Python standard library module, unittest. This is where the setUp() method comes from.
This library contains another method, tearDown(), that is called immediately after each test is run. More info can be found in the unittest documentation.
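For example, a rough sketch of such a cleanup (the ExternalService client and its create_customer/delete_customer/customer_exists methods are hypothetical placeholders for your real integration):

from django.test import TestCase

# Hypothetical client for the outside service; replace with your real integration.
from myapp.external import ExternalService


class CustomerSyncTests(TestCase):
    created_ids = []

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.service = ExternalService()

    @classmethod
    def tearDownClass(cls):
        # Runs once after all tests in this class: remove the customers
        # the tests created in the outside service.
        for customer_id in cls.created_ids:
            cls.service.delete_customer(customer_id)
        super().tearDownClass()

    def test_customer_is_created_remotely(self):
        customer_id = self.service.create_customer(name="Test Customer")
        self.created_ids.append(customer_id)
        self.assertTrue(self.service.customer_exists(customer_id))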

How to determine the scope of a test

How can I determine the best way to test a feature?
For example:
I have email validation done by a function that returns an object with errors. Should I test this with a unit test that exercises the function, or should I create an e2e test that checks the UI shows the proper validation message?
You could do both. Each of them tests different things.
In this case you should start with unit tests.
You want to get (quick) feedback from the test, and testing the function directly is the best feedback you can get. If the test fails, you are pretty sure it is related to the function under test. With unit tests you can easily test all paths and error conditions of the function.
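For example, a minimal sketch of such unit tests (shown in Python; the validate_email function is a hypothetical stand-in that returns a dict of errors, empty when the input is valid):

import unittest

# Hypothetical validator under test: returns a dict of errors, empty when valid.
from myapp.validation import validate_email


class ValidateEmailTests(unittest.TestCase):
    def test_valid_address_returns_no_errors(self):
        self.assertEqual(validate_email("user@example.com"), {})

    def test_missing_at_sign_is_reported(self):
        errors = validate_email("user.example.com")
        self.assertIn("email", errors)

    def test_empty_input_is_reported(self):
        errors = validate_email("")
        self.assertIn("email", errors)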
The e2e test checks that all the layers (which in the end call your function) work together. If it fails there could be an issue on any(!) layer. It would be more work to find the cause of the problem. That's why you do not test the details of the function from an e2e test. Why involve all the layers if you can check the function directly?
You then write more unit(!) tests on the UI side using stub data for your object. You don't call the function; pass the stub data to the UI and check that it is handled correctly. For example, you could test the success case without errors and another case with errors, and so on.
Having done that, you know that the function works and that the UI handles the result of the function.
Now you could write e2e tests.
Whether e2e tests make sense heavily depends on your context. In my experience they are a lot of work with little return, but this may be different in your context. I fear you have to find out for yourself whether they pay off :-)

Should I mock all the direct dependencies of a class in unit tests?

Let's say that we have a class Controller that depends on a class Service, and the Service class depends on a class Repository. Only Repository communicates with an external system (say DB) and I know it should be mocked when unit testing is executed.
My question: for unit tests, should I mock the Service class when the Controller class is tested, even though the Service class doesn't depend on any external systems directly? And why?
It depends on the kind of test you are writing: integration test or unit test.
I assume you want to write a unit test in this case. The purpose of a unit test is to solely test the business logic of your class so every other dependency should be mocked.
In this case you will mock the Service class. Doing so also allows you to prepare for testing certain scenarios based on the input you are passing to a certain method of Service. Imagine you have a Person findPerson(Long personID)-method in your Service. When testing your Controller you are not interested in doing everything that's necessary for having Service actually return the right output. For a certain test scenario for your Controller you just want it to return a Person whereas for a different test scenario you don't want it to return anything. Mocking makes this very easy.
Also note that if you mock your Service you don't have to mock Repository since your Service is already a mock.
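As an illustration, here is a sketch in Python using unittest.mock (the PersonController class and its show_person/find_person methods are invented stand-ins for the Controller/Service pair from the question):

import unittest
from unittest.mock import Mock

# Invented minimal controller: it only delegates to the injected service.
class PersonController:
    def __init__(self, service):
        self.service = service

    def show_person(self, person_id):
        person = self.service.find_person(person_id)
        return {"status": 404} if person is None else {"status": 200, "person": person}


class PersonControllerTests(unittest.TestCase):
    def test_returns_person_when_service_finds_one(self):
        service = Mock()
        service.find_person.return_value = {"name": "Alice"}
        controller = PersonController(service)

        response = controller.show_person(1)

        self.assertEqual(response["status"], 200)
        service.find_person.assert_called_once_with(1)

    def test_returns_404_when_service_finds_nothing(self):
        service = Mock()
        service.find_person.return_value = None
        controller = PersonController(service)

        self.assertEqual(controller.show_person(99)["status"], 404)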
TLDR; When writing a unit test for a certain class, just mock every other dependency to be able to manipulate the output of method invocations made to these dependencies.
Yes, mock Services when testing controllers. Unit tests help to identify the location of a regression. So a Test for a Service code should only fail if the service code has changed, not if the controller code has changed. That way, when the service test fails, you know for sure that the root cause lies in a change to Service.
Also, usually it is much easier to mock the service than to mock all repositories invoked by the service just to test the controller. So it makes your tests easier to maintain.
But in general, you may keep certain util classes unmocked, as you lose more than you gain by mocking those. Also see:
https://softwareengineering.stackexchange.com/questions/148049/how-to-deal-with-static-utility-classes-when-designing-for-testability
As with all engineering questions, TDD is no different: the answer is always "it depends". There are always trade-offs.
In the case of TDD, you develop the test through behavioral expectations first. In my experience, a behavioral expectation is a unit.
An example: say you want to get all users whose last name starts with 'A' and who are active in the system. So you would write a test that leads you to create a controller action to get active users that start with 'A': public ActionResult GetAllActiveUsersThatStartWithA().
In the end, I might have something like this:
public ActionResult GetAllActiveUsersThatStartWithA()
{
    var users = _repository.GetAllUsers();
    var activeUsersThatStartWithA = users.Where(u => u.IsActive && u.Name.StartsWith("A"));
    return View(activeUsersThatStartWithA);
}
This to me is a unit. I can now refactor (change my implementation without changing behavior) by adding a service class with the method below:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    var users = _repository.GetAllUsers();
    var activeUsersThatStartWith = users.Where(u => u.IsActive && u.Name.StartsWith(startsWith.ToString()));
    return activeUsersThatStartWith;
}
And my new implementation of the controller becomes
public ActionResult GetAllActiveUsersThatStartWithA()
{
    return View(_service.GetActiveUsersThatStartWithLetter('A'));
}
This is obviously a very contrived example, but it gives an idea of my point. The main benefit of doing it this way is that my tests aren't tied to any implementation details except the repository. Whereas, if I mocked out the service in my tests, I would now be tied to that implementation. If for whatever reason that service layer is removed, all my tests break. I would find it more likely that the service layer is more volatile to change than the repository layer.
The other thing to think about is that if I do mock out the service in my controller class, I could run into a scenario where all my tests pass, but the only way I find out the system is broken is through an integration test (meaning one in which out-of-process or cross-assembly components interact with each other), or through production issues.
If for instance I change the implementation of the service class to below:
public IEnumerable<User> GetActiveUsersThatStartWithLetter(char startsWith)
{
    throw new Exception();
}
Again, this is a very contrived example, but the point is still relevant. I would not catch this with my controller tests, hence it looks like the system is behaving properly with my passing "unit tests", but in reality the system is not working at all.
The downside with my approach is that the tests could become very cumbersome to setup. So the tradeoff is balancing out test complexity with abstraction/mockable implementations.
The key thing to keep in mind is that TDD gives the benefit of catching regressions, but its main benefit is to help design a system. In other words, don't let the design dictate the tests you write. Let the tests dictate the functionality of the system first, then worry about the design through refactoring.

VS2012 - Disable parallel test runs

I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other - I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never matter whether the tests execute synchronously or asynchronously.
Of course there are test cases that expect some fundamental piece of code to work; this might be your own code or a part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and assure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
You can read more here.
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. The names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
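For instance, a small sketch using Python's unittest hooks, where they are called setUp and tearDown (the ShoppingCart class is made up only to show where the shared code goes):

import unittest

# Made-up class under test, only to illustrate shared setup/cleanup.
from myapp.cart import ShoppingCart


class ShoppingCartTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test; the duplicated arrange code lives here once.
        self.cart = ShoppingCart(currency="EUR")
        self.cart.add_item("book", price=10)

    def tearDown(self):
        # Runs after every test; release whatever the test acquired.
        self.cart.clear()

    def test_total_includes_added_item(self):
        self.assertEqual(self.cart.total(), 10)

    def test_removing_item_empties_cart(self):
        self.cart.remove_item("book")
        self.assertEqual(self.cart.total(), 0)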
A simple example application:
it should read a config file
it should verify if config file is valid
it should update user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test that the read method returns the desired result (an object, string or however you've designed it)
create a mock config of what you expect from the read method and test that the verify method accepts it
at this point, create multiple mock configs covering all possible scenarios, check that the code handles each of them, and fix it accordingly (this is also where code coverage comes in)
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly
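A minimal sketch of those tests (read_config, verify_config and UserSettings are made-up names; the real code will differ):

import json
import tempfile
import unittest

# Hypothetical functions/class matching the steps above.
from myapp.config import read_config, verify_config, UserSettings


class ConfigTests(unittest.TestCase):
    def test_read_returns_parsed_config(self):
        # Write a small config file and check the read method parses it.
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
            json.dump({"theme": "dark"}, f)
        self.assertEqual(read_config(f.name), {"theme": "dark"})

    def test_verify_accepts_valid_mock_config(self):
        self.assertTrue(verify_config({"theme": "dark"}))

    def test_verify_rejects_invalid_mock_config(self):
        self.assertFalse(verify_config({"theme": 42}))

    def test_setter_updates_user_config(self):
        settings = UserSettings()
        settings.set("theme", "dark")
        self.assertEqual(settings.get("theme"), "dark")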
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work correctly. Additional tests, for example End-to-End (E2E) tests, aren't necessarily needed; I use them only to assure that the whole application flow works and to easily catch errors (e.g. an HTTP connection error).

How does one unit test sections of code that are procedural or event-based

I'm convinced from this presentation and other commentary here on the site that I need to learn to unit test. I also realize that there have been many questions here about what unit testing is. Each time I consider how it should be done in the application I am currently working on, I walk away confused. It is a XULRunner application, and a lot of the logic is event-based - when a user clicks here, this action takes place.
Often the examples I see for testing are testing classes - they instantiate an object, give it mock data, then check the properties of the object afterward. That makes sense to me - but what about the non-object-oriented pieces?
This guy mentioned that GUI-based unit testing is difficult in almost any testing framework; maybe that's the problem. The presentation linked above mentions that each test should only touch one class, one method at a time. That seems to rule out what I'm trying to do.
So the question: how does one unit test procedural or event-based code? Provide a link to good documentation, or explain it yourself.
On a side note, I also have the challenge of not having found a testing framework that is set up to test XULRunner apps - it seems that the tools just aren't developed yet. I imagine this is more peripheral than understanding the concepts, writing testable code, and applying unit testing.
The idea of unit testing is to test small sections of code with each test. In an event-based system, one form of unit testing you could do would be to test how your event handlers respond to various events. So your unit test might set an aspect of your program into a specific state, then call the event listener method directly, and finally test the subsequent state of your program.
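A rough sketch of that pattern (the LoginForm class and its on_submit_clicked handler are invented for illustration):

import unittest

# Invented example: a form object whose click handler updates its own state.
class LoginForm:
    def __init__(self):
        self.username = ""
        self.error = None

    def on_submit_clicked(self, event=None):
        # The event handler holds the logic we want to exercise directly.
        if not self.username:
            self.error = "Username is required"


class LoginFormTests(unittest.TestCase):
    def test_submit_with_empty_username_sets_error(self):
        form = LoginForm()        # put the program into a specific state
        form.on_submit_clicked()  # call the event listener directly
        self.assertEqual(form.error, "Username is required")  # check the new state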
If you plan on unit testing an event-based system, you will make your life a lot easier if you use the dependency injection pattern and, ideally, go the whole way and use inversion of control (see http://martinfowler.com/articles/injection.html and http://msdn.microsoft.com/en-us/library/aa973811.aspx for details of these patterns).
(thanks to pc1oad1etter for pointing out I'd messed up the links)
At first I would test events like this:
private bool fired;

// The handler just records that the event was raised.
private void HandlesEvent(object sender, EventArgs e)
{
    fired = true;
}

public void Test()
{
    // 'instance' is the object under test that exposes FireEvent.
    instance.FireEvent += HandlesEvent;
    instance.PerformEventFiringAction(null, null);
    Assert.IsTrue(fired);
}
And then I discovered RhinoMocks.
RhinoMocks is a framework that creates mock objects and it also handles event testing. It may come in handy for your procedural testing as well.
Answering my own question here, but I came across an article that explains the problem and walks through a simple example -- Agile User Interface Development. The code and images are great, and here is a snippet that shows the idea:
Agile gurus such as Kent Beck and David Astels suggest building the GUI by keeping the view objects very thin, and testing the layers "below the surface." This "smart object/thin view" model is analogous to the familiar document-view and client-server paradigms, but applies to the development of individual GUI elements. Separation of the content and presentation improves the design of the code, making it more modular and testable. Each component of the user interface is implemented as a smart object, containing the application behavior that should be tested, but no GUI presentation code. Each smart object has a corresponding thin view class containing only generic GUI behavior. With this design model, GUI building becomes amenable to TDD.
The problem is that "event-based programming" ties far too much logic to the events. The way such a system should be designed is that there should be a subsystem that raises events (and you can write tests to ensure that these events are raised in the proper order), and another subsystem that deals only with managing, say, the state of a form. You can then write a unit test that verifies that, given the correct inputs (i.e. events being raised), the form state is set to the correct values.
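A small sketch of that separation (EventBus and FormState are invented names, shown in Python):

import unittest

# Invented: one subsystem only raises events, the other only manages form state.
class EventBus:
    def __init__(self):
        self.raised = []

    def raise_event(self, name, payload=None):
        self.raised.append((name, payload))


class FormState:
    def __init__(self):
        self.submit_enabled = False

    def handle(self, name, payload=None):
        if name == "user_logged_in":
            self.submit_enabled = True


class SeparationTests(unittest.TestCase):
    def test_events_are_raised_in_proper_order(self):
        bus = EventBus()
        bus.raise_event("user_logged_in", {"id": 1})
        bus.raise_event("form_opened")
        self.assertEqual([name for name, _ in bus.raised],
                         ["user_logged_in", "form_opened"])

    def test_form_state_is_set_from_events(self):
        state = FormState()
        state.handle("user_logged_in")
        self.assertTrue(state.submit_enabled)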
Beyond that, the actual event handler that is raised from component 1 and calls the behavior on component 2 is just integration testing, which can be done manually by a QA person.
Your question doesn't state your programming language of choice, but mine is C#, so I'll exemplify using that. This is however just a refinement of Gilligan's answer, using anonymous delegates to inline your test code. I'm all in favor of making tests as readable as possible, and to me that means all test code within the test method:
// Arrange
var car = new Car();
string changedPropertyName = "";
car.PropertyChanged += delegate(object sender, PropertyChangedEventArgs e)
{
    if (sender == car)
        changedPropertyName = e.PropertyName;
};

// Act
car.Model = "Volvo";

// Assert
Assert.AreEqual("Model", changedPropertyName,
    "The notification of a property change was not fired correctly.");
The class I'm testing here implements the INotifyPropertyChanged interface, and therefore a PropertyChanged event should be raised whenever a property's value has changed.
An approach I've found helpful for procedural code is to use TextTest. It's not so much about unit testing, but it helps you do automated regression testing. The idea is that you have your application write a log, then use TextTest to compare the log before and after your changes.
See the oft-linked Working Effectively with Legacy Code. See the sections titled "My Application Is All API Calls" and "My Project is Not Object-Oriented. How Do I Make Safe Changes?".
In the C/C++ world (in my experience) the best solution in practice is to use the linker "seam" and link against test doubles for all the functions called by the function under test. That way you don't change any of the legacy code, but you can still test it in isolation.