Maximizing test coverage and minimizing overlap/duplication

What are people's strategies for maximizing test coverage while minimizing test duplication and overlap, particularly between unit tests and functional or integration tests? The issue is not specific to any particular language or framework, but just as an example, say you have a Rails app that allows users to post comments. You might have a User model that looks something like this:
class User < ActiveRecord::Base
  def post_comment(attributes)
    comment = self.comments.create(attributes)
    notify_friends('created', comment)
    share_on_facebook('created', comment)
    share_on_twitter('created', comment)
    award_badge('first_comment') unless self.comments.size > 1
  end

  def notify_friends(action, object)
    friends.each do |f|
      f.notifications.create(subject: self, action: action, object: object)
    end
  end

  def share_on_facebook(action, object)
    FacebookClient.new.share(subject: self, action: action, object: object)
  end

  def share_on_twitter(action, object)
    TwitterClient.new.share(subject: self, action: action, object: object)
  end

  def award_badge(badge_name)
    self.badges.create(name: badge_name)
  end
end
As an aside, I would actually use service objects rather than put this type of application logic in models, but I wrote the example this way just to keep it simple.
Anyway, unit testing the post_comment method is pretty straightforward. You would write tests to assert that:
The comment gets created with the given attributes
The user's friends receive notifications about the user creating the comment
The share method is called on an instance of FacebookClient, with the expected hash of params
Ditto for TwitterClient
The user gets the 'first_comment' badge when this is the user's first comment
The user doesn't get the 'first_comment' badge when he/she has previous comments
But then how do you write your functional and/or integration tests to ensure the controller actually invokes this logic and produces the desired results in all of the different scenarios?
One approach is just to reproduce all of the unit test cases in the functional and integration tests. This achieves good test coverage but makes the tests extremely burdensome to write and maintain, especially when you have more complex logic. This does not seem like a viable approach for even a moderately complex application.
Another approach is to just test that the controller invokes the post_comment method on the user with the expected params. Then you can rely on the unit test of post_comment to cover all of the relevant test cases and verify the results. This seems like an easier way to achieve the desired coverage, but now your tests are coupled with the specific implementation of the underlying code. Say you find that your models have gotten bloated and difficult to maintain and you refactor all of this logic into a service object like this:
class PostCommentService
  attr_accessor :user, :comment_attributes
  attr_reader :comment

  def initialize(user, comment_attributes)
    @user = user
    @comment_attributes = comment_attributes
  end

  def post
    @comment = self.user.comments.create(self.comment_attributes)
    notify_friends('created', comment)
    share_on_facebook('created', comment)
    share_on_twitter('created', comment)
    award_badge('first_comment') unless self.user.comments.size > 1
  end

  private

  def notify_friends(action, object)
    self.user.friends.each do |f|
      f.notifications.create(subject: self.user, action: action, object: object)
    end
  end

  def share_on_facebook(action, object)
    FacebookClient.new.share(subject: self.user, action: action, object: object)
  end

  def share_on_twitter(action, object)
    TwitterClient.new.share(subject: self.user, action: action, object: object)
  end

  def award_badge(badge_name)
    self.user.badges.create(name: badge_name)
  end
end
Maybe the actions like notifying friends, sharing on twitter, etc. would logically be refactored into their own service objects too. Regardless of how or why you refactor, your functional or integration test would now need to be rewritten if it was previously expecting the controller to call post_comment on the User object. Also, these types of assertions can get pretty unwieldy. In the case of this particular refactoring, you would now have to assert that the PostCommentService constructor is invoked with the appropriate User object and comment attributes, and then assert that the post method is invoked on the returned object. This gets messy.
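The coupling described above can be made concrete with a short sketch. This is a hypothetical Python analogue using unittest.mock (none of these names come from the app itself):

```python
from unittest.mock import Mock

# Hypothetical Python analogue (names are illustrative, not from the post) of a
# functional test that only asserts delegation to the model.
def create_comment_action(user, params):
    user.post_comment(params)  # the controller merely delegates

user = Mock()
create_comment_action(user, {"body": "First!"})

# This assertion couples the test to the implementation: it passes today, but it
# breaks the moment the logic moves from User#post_comment into a service
# object, even though the behaviour the end user sees is unchanged.
user.post_comment.assert_called_once_with({"body": "First!"})
```

The test says nothing about comments, notifications, or badges; it only pins down which method the controller happened to call.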
Also, your test output is a lot less useful as documentation if the functional and integration tests describe implementation rather than behavior. For example, the following test (using Rspec) is not that helpful:
it "creates a PostCommentService object and executes the post method on it" do
  ...
end
I would much rather have tests like this:
it "creates a comment with the given attributes" do
  ...
end

it "creates notifications for the user's friends" do
  ...
end
How do people solve this problem? Is there another approach that I'm not considering? Am I going overboard in trying to achieve complete code coverage?

I'm talking from a .Net/C# perspective here, but I think it's generally applicable.
To me, a unit test tests just the object under test, not any of its dependencies. The class is tested to make sure it communicates correctly with its dependencies, using mock objects to verify that the appropriate calls are made, and that it handles returned objects in the correct manner (in other words, the class under test is isolated). In the example above, this would mean mocking the Facebook/Twitter interfaces and checking the communication with each interface, not the actual API calls themselves.
It looks like in your original unit test example above you are talking about testing all of the logic (i.e. post to Facebook, Twitter, etc.) in the same tests. I would say that tests written this way are actually functional tests already. Now, if you absolutely cannot modify the class under test, writing a unit test at this point would be unnecessary duplication. But if you can modify the class under test, refactoring so that dependencies sit behind interfaces would let you have a set of unit tests for each individual object, plus a smaller set of functional tests that verify the whole system behaves correctly together.
I know you said "regardless of how they are refactored above", but to me, refactoring and TDD go hand in hand. Trying to do TDD or unit testing without refactoring is an unnecessarily painful experience, and it leads to a design that is more difficult to change and maintain.
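That isolation can be sketched concretely. Here is a minimal Python analogue of the poster's example, with the social clients injected so the class can be tested with mocks (all names are illustrative, not from the post):

```python
from unittest.mock import Mock

# Illustrative sketch only: the poster's example re-imagined in Python with the
# social clients injected as dependencies, so a unit test can isolate the class.
class CommentPoster:
    def __init__(self, facebook, twitter):
        self.facebook = facebook
        self.twitter = twitter

    def post(self, user, text):
        comment = {"user": user, "text": text}
        self.facebook.share(subject=user, action="created", object=comment)
        self.twitter.share(subject=user, action="created", object=comment)
        return comment

# The unit test verifies communication with the collaborators, not the real APIs.
facebook, twitter = Mock(), Mock()
comment = CommentPoster(facebook, twitter).post("alice", "hello")
facebook.share.assert_called_once_with(subject="alice", action="created", object=comment)
twitter.share.assert_called_once_with(subject="alice", action="created", object=comment)
```

A smaller functional test can then exercise the whole chain once, without re-checking every branch the unit tests already cover.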


What is the controller allowed to assume about what it receives from a service?

Quick terminology question that's somewhat related to my main question: what is the correct term for a model class, and the term for an instance of that class?
I am learning Test Driven Development and want to make sure I am learning it the right way so I can form good habits.
My current project has a SalesmanController, which is pretty much a basic resource controller. Here's my current issue (I can get it working, but I want to make sure it's done as "right" as possible).
I have a 'Salesman' model.
The 'Salesman' is mapped as having many 'Sales' using my ORM.
The 'Sales' model is mapped as belongsTo 'Salesman' using my ORM.
I have created a ORMSalesmanRepository which implements the SalesmanRepositoryInterface.
My controller has SalesmanRepositoryInterface passed to it upon construction(constructor dependency injection).
My controller calls the find method on the SalesmanRepositoryInterface implementation it has been given to find the correct salesman.
My view needs information on the salesman and information on all 'Sales' records that belong to him.
The current implementation of SalesmanRepositoryInterface returns an instance of an ORMRecord, which is then passed to the view, and the view retrieves the sales from the ORMRecord.
My gut tells me this implementation is wrong. The ORM record implements ArrayAccess, so it still behaves like an array as far as the view knows.
However, when trying to go back and implement my unit tests I am running into issues mocking my dependencies. (I now know with TDD I'm supposed to make my unit tests and then develop the actual implementation, didn't figure this out till recently).
Salesman = MockedSalesman;
SalesRecords = MockedSalesman->Sales;
Is it poor programming to expect my Salesman to return an ORMObject for the controller to use (for chaining relationships, maybe?), or is a controller becoming too 'fat' if I allow it to call more than just basic get methods (using ArrayAccess []) on the ORMObject? Should the controller just assume that whatever it gets back is an array (or at least acts like one)?
Also, should it ever come up where something one of my mocked classes returns needs to be mocked again?
Thanks in advance everybody.
What is the correct term for a model class, and the term for an instance of that class?
Depends on what you mean by "model classes". Technically, the model is a layer that contains several groups of classes; the most notable ones would be mappers, services, and domain objects. Domain objects as a whole are the implementation of accumulated knowledge about business requirements, insight from specialists, and project goals. This knowledge is referred to as the "domain model".
Basically, there is no such thing as "model class". There are classes that are part of model.
What is the controller allowed to assume about what it receives from a service?
Nothing, because the controller should not receive anything from the model layer. The responsibility of the controller is to alter the state of the model layer (and, in rare cases, the state of the current view).
The controller is NOT responsible for:
gathering data from the model layer,
initializing views,
passing data from the model layer to views,
dealing with authorization checks.
The current implementation of SalesmanRepositoryInterface returns an instance of an ORMRecord, which is then passed to the view, and the view retrieves the sales from the ORMRecord.
It sounds like you are implementing the active record pattern. It has a very limited use case where it is appropriate: when an object mostly consists of getters and setters that are stored directly in a single table. For anything beyond that, active record becomes an anti-pattern, because it violates SRP and you lose the ability to test your domain logic without a database.
Also, a repository should return an instance of a domain object and make sure that you are not retrieving data repeatedly. Repositories are not factories for active record instances.
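A minimal Python sketch of that idea, with hypothetical names, might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical sketch (names illustrative): the repository maps whatever the
# ORM returns onto a plain domain object, so the controller/view never touch a
# live ORM record and cannot chain further queries through it.
@dataclass
class Salesman:
    id: int
    name: str
    sales: list = field(default_factory=list)

class ORMSalesmanRepository:
    def __init__(self, orm):
        self.orm = orm  # any ORM gateway will do for the sketch

    def find(self, salesman_id):
        record = self.orm.fetch("salesman", salesman_id)
        return Salesman(id=record["id"], name=record["name"],
                        sales=list(record["sales"]))

class FakeOrm:
    def fetch(self, table, pk):
        return {"id": pk, "name": "Joe Bloggs",
                "sales": [{"order_id": 123}]}

salesman = ORMSalesmanRepository(FakeOrm()).find(5)
assert isinstance(salesman, Salesman)  # a plain domain object, not an ORM record
```

Because Salesman has no dependencies of its own, tests can build one by hand without touching a database at all.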
Is it poor programming to expect my Salesman to return an ORMObject for the controller to use (for chaining relationships, maybe?), or is a controller becoming too 'fat' if I allow it to call more than just basic get methods (using ArrayAccess []) on the ORMObject?
Yes, it's bad code. Your application logic (one that would usually be contained in services) is leaking in the presentation layer.
At this stage I wouldn't stress too much about your implementation - if you try and write proper tests, you'll quickly find out what works and what doesn't.
The trick is to think hard about what each component is trying to achieve. What is your controller method supposed to do? Most likely it is intended to create a ViewModel of some kind, and then choose which View to render. So there are a few tests right there:
When I call my controller method with given arguments (ShowSalesmanDetail(5))
It should pick the correct View to render ('ShowSalesmanDetail')
It should construct the ViewModel that I expect (A Salesman object with some Sales)
In theory, at this point you don't care how the controller constructs the model, only that it does. In practice though you do need to care, because the controller has dependencies (the big one being the database), which you need to cater for. You've chosen to abstract this with a Repository class that talks to an ORM, but that shouldn't impact the purpose of your tests (though it will definitely alter how you implement those tests).
Ideally the Salesman object in your example would be a regular class, with no dependencies of its own. This way, your repository can construct a Salesman object by populating it from the database/ORM, and your unit tests can also use Salesman objects that you've populated yourself with test data. You shouldn't need to mock your models or data classes.
My personal preference is that you don't take your 'data entities' (what you get back from your ORM) and put them into Views. I would construct a ViewModel class that is tied to one View, and then map from your data entities to your ViewModel. In your example, you might have a Salesman class which represents the data in the database, then a SalesmanModel which represents the information you are displaying on the page (which is usually a subset of what's in the db).
So you might end up with a unit test looking something like this:
public void CallingShowSalesmanShouldReturnAValidModel()
{
    ISalesmanRepository repository = A.Fake<ISalesmanRepository>();
    SalesmanController controller = new SalesmanController(repository);
    const int salesmanId = 5;

    Salesman salesman = new Salesman
    {
        Id = salesmanId,
        Name = "Joe Bloggs",
        Address = "123 Sesame Street",
        Sales = new[]
        {
            new Sale { OrderId = 123, SaleDate = DateTime.Now.AddMonths(-1) }
        }
    };

    A.CallTo(() => repository.Find(salesmanId)).Returns(salesman);

    ViewResult result = controller.ShowSalesman(salesmanId) as ViewResult;
    SalesmanModel model = result.Model as SalesmanModel;

    Assert.AreEqual(salesman.Id, model.Id);
    Assert.AreEqual(salesman.Name, model.Name);

    SaleModel saleModel = model.Sales.First();
    Assert.AreEqual(salesman.Sales.First().OrderId, saleModel.OrderId);
}
This test is by no means ideal but hopefully gives you an idea of the structure. For reference, the A.Fake<> and A.CallTo() stuff is from FakeItEasy, you could replace that with your mocking framework of choice.
If you were doing proper TDD, you wouldn't have started with the Repository - you'd have written your Controller tests, probably got them passing, and then realised that having all this ORM/DB code in the controller is a bad thing, and refactored it out. The same approach should be taken for the Repository itself and so on down the dependency chain, until you run out of things that you can mock (the ORM layer most likely).

What are the best practices for testing "different layers" in Django? [closed]

I'm NOT new to testing, but got really confused with the mess of recommendations for testing different layers in Django.
Some recommend (and they are right) to avoid Doctests in the model as they are not maintainable...
Others say don't use fixtures, as they are less flexible than helper functions, for instance..
There are also two groups of people who fight for using Mock objects. The first group believe in using Mock and isolating the rest of the system, while another group prefer to Stop Mocking and start testing..
All I have mentioned above, were mostly in regards to testing models. Functional testing is an another story (using test.Client() VS webTest VS etc. )
Is there ANY maintainable, extensible and proper way for testing different layers?
UPDATE
I am aware of Carl Meyer's talk at PyCon 2012.
UPDATE 08-07-2012
I can tell you my practices for unit testing that are working pretty well for my own ends and I'll give you my reasons:
1.- Use fixtures only for information that is necessary for testing but is not going to change. For example, you need a user for every test you run, so use a base fixture to create users.
2.- Use a factory to create your objects. I personally love factory_boy (it comes from FactoryGirl, a Ruby library). I create a separate file called factories.py for every app, where I keep all these objects. This way I keep the objects I need out of the test files, which makes the tests a lot more readable and easier to maintain. A cool thing about this approach is that you create a base object that can be modified when you want to test something else based on some object from the factory. It also doesn't depend on Django, so when I started using MongoDB and needed to test those objects, the migration was smooth. Now, having read about factories, it's natural to ask "why would I want to use fixtures at all?". Since these fixtures should never change, all the extra goodies from factories are of little use there, and Django supports fixtures very well out of the box.
3.- I mock calls to external services, because these calls make my tests very slow and they depend on things that have nothing to do with my code being right or wrong. For example, if I tweet within my test, I run the real call once to check that it tweets correctly, copy the response, and mock that object so it returns the exact same response every time without making the actual call. Sometimes it's also good to test what happens when things go wrong, and mocking is great for that.
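Point 3 can be sketched with Python's unittest.mock; the tweet() helper and its endpoint here are hypothetical stand-ins for a real API call:

```python
import json
import urllib.request
from unittest.mock import patch

# Hedged sketch: tweet() and the endpoint URL are hypothetical. The test swaps
# the network call for a canned response copied from one real run.
def tweet(text):
    resp = urllib.request.urlopen("https://example.invalid/tweet",
                                  data=text.encode())
    return json.loads(resp.read())

def test_tweet_returns_parsed_response():
    canned = b'{"id": 42, "text": "hello"}'  # copied once from a real call
    with patch("urllib.request.urlopen") as fake_urlopen:
        fake_urlopen.return_value.read.return_value = canned
        result = tweet("hello")
    assert result == {"id": 42, "text": "hello"}

test_tweet_returns_parsed_response()
```

The test now runs instantly and never fails because the external service is down; failure injection is one `fake_urlopen.side_effect = ...` away.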
4.- I use a continuous integration server (Jenkins is my recommendation here) which runs the tests every time I push to my staging server and emails me if they fail. This is just great, because it happens to me a lot that my last change breaks something else and I forget to run the tests. It also gives you other goodies like a coverage report, pylint/jslint/pep8 checks, and there are a lot of plugins for showing different statistics.
Regarding your question about testing the front end, Django comes with some helper functions to handle this in a basic way.
This is what I personally use: you can fire GETs and POSTs, log the user in, etc. That's enough for me. I don't tend to use a complete front-end testing engine like Selenium, since I feel it's overkill for testing anything beyond the business layer. I am sure some will differ, and it always depends on what you are working on.
Besides my opinion, Django 1.4 comes with a very handy integration for in-browser frameworks.
I'll set up an example app where I can apply these practices so they're easier to follow. Let's create a very basic blog app:
structure
blogger/
    __init__.py
    models.py
    fixtures/base.json
    factories.py
    tests.py
models.py
from datetime import datetime

from django.contrib.auth.models import User
from django.db import models

class Blog(models.Model):
    user = models.ForeignKey(User)
    text = models.TextField()
    created_on = models.DateTimeField(default=datetime.now)  # pass the callable, not datetime.now()
fixtures/base.json
[
    {
        "pk": 1,
        "model": "auth.user",
        "fields": {
            "username": "fragilistic_test",
            "first_name": "demo",
            "last_name": "user",
            "is_active": true,
            "is_superuser": true,
            "is_staff": true,
            "last_login": "2011-08-16 15:59:56",
            "groups": [],
            "user_permissions": [],
            "password": "IAmCrypted!",
            "email": "test@email.com",
            "date_joined": "1923-08-16 13:26:03"
        }
    }
]
factories.py
import factory

from blog.models import User, Blog

class BlogFactory(factory.Factory):
    FACTORY_FOR = Blog
    user__id = 1
    text = "My test text blog of fun"
tests.py
import datetime

from django.test import TestCase
from mocker import Mocker

from blog.factories import BlogFactory
from blog.models import Blog

class BlogTest(TestCase):
    fixtures = ['base']  # loads the user fixture

    def setUp(self):
        self.blog = BlogFactory()
        self.blog2 = BlogFactory(text="Another test based on the last one")

    def test_blog_text(self):
        self.assertEqual(Blog.objects.filter(user__id=1).count(), 2)

    def test_post_blog(self):
        # Let's suppose we did some views
        self.client.login(username='fragilistic_test', password='IAmCrypted!')
        response = self.client.post('/blogs', {'text': 'test text', 'user': '1'})
        self.assertEqual(response.status_code, 200)
        self.assertEqual(Blog.objects.filter(text='test text').count(), 1)

    def test_mocker(self):
        # We will mock datetime so the blog post is created on the date we want
        mocker = Mocker()
        co = mocker.replace('datetime.datetime')
        co.now()
        mocker.result(datetime.datetime(2012, 6, 12))
        with mocker:
            res = Blog.objects.create(user_id=1, text='test')
        self.assertEqual(res.created_on, datetime.datetime(2012, 6, 12))

    def tearDown(self):
        # Django takes care of this, but to be strict I'll add it
        Blog.objects.all().delete()
Notice I am using some specific technology for the sake of the example (which hasn't been tested, by the way).
I have to insist, this may not be the standard best practice (which I doubt it exists) but it is working pretty well for me.
I really like the suggestions from @Hassek and want to stress what an excellent point he makes about the obvious lack of standard practices. This holds true for many of Django's aspects, not just testing: since all of us approach the framework with different concerns in mind, and given the great degree of flexibility we have in designing our applications, we often end up with drastically different solutions to the same problem.
Having said that, though, most of us still strive for many of the same goals when testing our applications, mainly:
Keeping our test modules neatly organized
Creating reusable assertions, helper methods, and helper functions that reduce the LOC of test methods, making them more compact and readable
Showing that there is an obvious, systematic approach to how the application components are tested
Like @Hassek's, these are my preferences, which may directly conflict with the practices you are applying, but I feel it's nice to share the things we've proven to work, if only in our own case.
No test case fixtures
Application fixtures work great, in cases you have certain constant model data you'd like to guarantee to be present in the database, say a collection of towns with their names and post office numbers.
However, I see this as an inflexible solution for providing test case data. Test fixtures are very verbose, model mutations force you either to go through a lengthy process of reproducing the fixture data or to perform tedious manual changes, and maintaining referential integrity by hand is difficult.
Additionally, you'll most likely use many kinds of fixtures in your tests, not just for models: you'd like to store the response body from API requests, to create fixtures that target NoSQL database backends, to have fixtures that populate form data, etc.
In the end, utilizing APIs to create data is concise, readable and it makes it much easier to spot relations, so most of us resort to using factories for dynamically creating fixtures.
Make extensive use of factories
Factory functions and methods are preferable to stamping out your test data. You can create module-level factory functions, or test case methods, that you may want to reuse across application tests or throughout the whole project. In particular, factory_boy, which @Hassek mentions, gives you the ability to inherit/extend fixture data and do automatic sequencing, both of which would look a bit clumsy done by hand.
The ultimate goal of utilizing factories is to cut down on code-duplication and streamline how you create test data. I cannot give you exact metrics, but I'm sure if you go through your test methods with a discerning eye you will notice that a large portion of your test code is mainly preparing the data that you'll need to drive your tests.
When this is done incorrectly, reading and maintaining tests becomes an exhausting activity. This tends to escalate when data mutations lead to not-so-obvious test failures across the board, at which point you'll not be able to apply systematic refactoring efforts.
My personal approach to this problem is to start with a myproject.factory module that creates easy-to-access references to QuerySet.create methods for my models and also for any objects I might regularly use in most of my application tests:
from django.contrib.auth.models import User, AnonymousUser
from django.test import RequestFactory
from myproject.cars.models import Manufacturer, Car
from myproject.stores.models import Store
create_user = User.objects.create_user
create_manufacturer = Manufacturer.objects.create
create_car = Car.objects.create
create_store = Store.objects.create
_factory = RequestFactory()
def get(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.get(path, data, **extra)
    request.user = user
    return request

def post(path='/', data={}, user=AnonymousUser(), **extra):
    request = _factory.post(path, data, **extra)
    request.user = user
    return request
This in turn allows me to do something like this:
from myproject import factory as f  # terse alias

# A verbose, albeit readable approach to creating instances
manufacturer = f.create_manufacturer(name='Foomobiles')
car1 = f.create_car(manufacturer=manufacturer, name='Foo')
car2 = f.create_car(manufacturer=manufacturer, name='Bar')

# Reduce the crud for creating some common objects
manufacturer = f.create_manufacturer(name='Foomobiles')
data = {'name': 'Foo', 'manufacturer': manufacturer.id}
request = f.post(data=data)
view = CarCreateView()
response = view.post(request)
Most people are rigorous about reducing code duplication, but I actually intentionally introduce some whenever I feel it contributes to test comprehensiveness. Again, the goal with whichever approach you take to factories is to minimize the amount of brainfuck you introduce into the header of each test method.
Use mocks, but use them wisely
I'm a fan of mock, as I've developed an appreciation for the author's solution to what I believe was the problem he wanted to address. The tools provided by the package allow you to form test assertions by injecting expected outcomes.
# Creating mocks to simplify tests
factory = RequestFactory()
request = factory.get('/')
request.user = Mock(is_authenticated=lambda: True)  # a mock of an authenticated user
view = DispatchForAuthenticatedOnlyView.as_view()
response = view(request)

# Patching objects to return expected data
@patch.object(CurrencyApi, 'get_currency_list', return_value="{'foo': 1.00, 'bar': 15.00}")
def test_converts_between_two_currencies(self, currency_list_mock):
    converter = Converter()  # uses CurrencyApi under the hood
    result = converter.convert(from_currency='bar', to_currency='foo', amount=45)
    self.assertEqual(3, result)
As you can see, mocks are really helpful, but they have a nasty side effect: your mocks plainly show that you're making assumptions about how your application behaves, which introduces coupling. If Converter is refactored to use something other than the CurrencyApi, someone may not immediately understand why the test method is suddenly failing.
So, with great power comes great responsibility: if you're going to be a smartass and use mocks to avoid deeply rooted test obstacles, you may completely obfuscate the true nature of your test failures.
Above all, be consistent. Very very consistent
This is the most important point to be made. Be consistent with absolutely everything:
how you organize code in each of your test modules
how you introduce test cases for your application components
how you introduce test methods for asserting the behavior of those components
how you structure test methods
how you approach testing common components (class-based views, models, forms, etc.)
how you apply reuse
For most projects, the question of how you're collaboratively going to approach testing is often overlooked. While the application code itself looks perfect (adhering to style guides, using Python idioms, reapplying Django's own approach to solving related problems, textbook use of framework components, etc.), no one really makes an effort to figure out how to turn test code into a valid, useful communication tool. It's a shame if, perhaps, having clear guidelines for test code is all it takes.

Unit Testing basic Controllers

I have a number of simple controller classes that use Doctrine's entity manager to retrieve data and pass it to a view.
public function indexAction() {
    $pages = $this->em->getRepository('Model_Page')->findAll();
    $this->view->pages = $pages;
}
What exactly should we be testing here?
I could test routing on the action to ensure that's configured properly
I could potentially test that the appropriate view variables are being set, but this is cumbersome
The findAll() method should probably be in a repository layer which can be tested using mock data, but then this constitutes a different type of test and brings us back to the question of
What should we be testing as part of controller tests?
The controller holds core logic for your application. Simple "index" actions don't do anything specific, but controllers that verify and actively use data and generate viewmodels carry pretty much the most functionality of the system.
For example, consider a login form. Whenever the login data is posted, the controller should validate the login/password and return: 1) the index page when the credentials are good, showing "Welcome, {user}" text; 2) the login page, saying that the login is not found in the db; 3) the login page, saying that the password doesn't match.
These three types of output make perfect test cases. You should validate that the correct viewmodels/views are sent back to the client for each action.
You shouldn't look at a controller as something mysterious. It's just another piece of code, and it's tested like any other code: any complicated logic that gives business value to the user should be tested.
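Those three test cases can be sketched framework-free. This Python sketch uses purely illustrative names and returns a (view, message) pair as a stand-in for a real view/viewmodel:

```python
# Framework-free sketch (all names hypothetical) of the three login outcomes:
# each return value pairs the view to render with the message shown to the user.
USERS = {"alice": "s3cret"}

def login_controller(username, password):
    if username not in USERS:
        return ("login", "login not found in db")
    if USERS[username] != password:
        return ("login", "password does not match")
    return ("index", "Welcome, %s" % username)

# One test case per possible output:
assert login_controller("alice", "s3cret") == ("index", "Welcome, alice")
assert login_controller("bob", "whatever") == ("login", "login not found in db")
assert login_controller("alice", "wrong") == ("login", "password does not match")
```

In a real framework the assertions would compare rendered views and viewmodels rather than tuples, but the shape of the test cases stays the same.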
Also, I'd recommend using acceptance testing with framework like Cucumber to generate meaningful test cases.
Probably the controller is the hardest thing to test, as it has many dependencies. In theory you should test it in full isolation, but as you've already seen, that makes little sense.
You should probably start with a functional or acceptance test, which exercises your controller action as a whole. I agree with the previous answer that you should try acceptance-testing tools. But Cucumber is for Ruby; for PHP you can try Codeception. It makes tests simple and meaningful.
There is also an article on the Codeception site on how to test sample controllers.

Django tests reliant on other pages/behaviour

I've started writing some tests for my Django app and I'm unsure how best to structure the code.
Say I have a register page and a page for logged in users only.
My first plan was to have an earlier test method perform the registration and a later method use that login to test the page:
def test_register_page(self):
    # send request to register page and check user has been registered correctly
    ...

def test_restricted_page(self):
    c = Client()
    c.login(username="someUser", password="pass")
    c.post("/someRestrictedPage/")
    # test response
However this means that now one of my tests rely on the other.
The alternative I see is calling register in setUp(), but this still means that the restricted-page test relies on the register page working.
I could try creating a new user manually in setUp(), which I also don't like, because then I'm not testing a user created by the system.
What is the usual pattern for testing this kind of situation?
You are trying to mix together a lot of different functionality in one test case. A clean design would have one test case
for user registration and
one for the view.
Having them depend on each other introduces a lot of coupling between them, and if a test fails the error will be even harder to debug. The success of the registration test should be determined by the correct creation of the user instance (so check the necessary attributes etc. of the user), not by being able to log in on a certain page. Therefore you will need to set up a "correct" user instance for the view test case. This may seem a bit more complicated than necessary, but it will make future maintenance a lot easier.
What you are trying to do is more like an integration test, which tests a whole system; before that, you should split your system into functional units and write unit tests for those units!
The smaller and more well-defined the individual tests are, the easier their maintenance and debugging will be.
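One way to sketch that split (framework-free Python; the in-memory user store is a hypothetical stand-in for the real registration view and login machinery):

```python
import unittest

# Minimal in-memory stand-ins (hypothetical) for the split this answer
# recommends: the registration test checks the created user's attributes, while
# the restricted-page test sets up a known-good user directly in setUp().
USERS = {}

def register(username, password):
    USERS[username] = {"password": password, "active": True}
    return USERS[username]

def restricted_page(username):
    return "200 OK" if USERS.get(username, {}).get("active") else "302 Redirect"

class RegisterTest(unittest.TestCase):
    def test_register_creates_active_user(self):
        user = register("newUser", "pass")
        self.assertTrue(user["active"])  # assert on the user's attributes

class RestrictedPageTest(unittest.TestCase):
    def setUp(self):
        USERS.clear()
        # Set up a "correct" user directly; no dependency on the register page.
        USERS["someUser"] = {"password": "pass", "active": True}

    def test_restricted_page_for_logged_in_user(self):
        self.assertEqual("200 OK", restricted_page("someUser"))
```

If registration breaks, only RegisterTest fails; the restricted-page test keeps pointing at its own view.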

How to deal with setUp() addiction when writing tests?

I'm somewhat new to writing tests. I find myself struggling to keep my setUps clean and concise, instead trying to accomplish too much with an uber-setUp.
My question is, how do you split up your testing?
Do your tests include one or two lines of independent step code?
def test_public_items():
    item1 = PublicItem()
    item2 = PublicItem()
    assertEqual(public_items, [item1, item2])
or do you factor that into the setUp no matter what?
If that's the case, how do you deal with test class separation? Do you create a new class when one set of tests needs a different setUp then another set of tests?
I believe you've hit a couple of anti-patterns here
Excessive Setup
Inappropriately shared fixture.
The rule of thumb is that all tests in a particular test fixture should need the code in the Setup() method.
If you write a test that needs more or less setup than what is currently present, that may be a hint that it belongs in a new test fixture. Inertia against creating a new test fixture is what snowballs the setup code into one big ball of mud that tries to do everything for all tests. It hurts readability quite a bit: you can't see the test amid the setup code, most of which may not even be relevant to the test you're looking at.
That said, it is okay for a test to have some specific setup instructions right in the test, on top of the common setup. (Those belong to the first of the Arrange-Act-Assert triad.) However, if you find those instructions duplicated across multiple tests, you should probably move all those tests out into a new test fixture, whose
setup_of_new_fixture = old_setup + recurring_arrange_instruction
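That formula can be expressed directly with fixture inheritance; a minimal unittest sketch with illustrative names:

```python
import unittest

# Sketch of "setup_of_new_fixture = old_setup + recurring_arrange_instruction":
# the recurring arrange step lives in a subclass fixture instead of bloating
# the shared setUp (names are illustrative).
class BaseFixture(unittest.TestCase):
    def setUp(self):
        self.items = []  # old_setup: what every test in the hierarchy needs

class PublicItemFixture(BaseFixture):
    def setUp(self):
        super().setUp()              # reuse the common arrange step
        self.items.append("public")  # recurring_arrange_instruction

    def test_has_public_item(self):
        self.assertIn("public", self.items)
```

Tests that don't need the extra arrange step stay in fixtures derived from BaseFixture alone, so no test pays for setup it doesn't use.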
Yes, a test fixture (embodied in a class) should be exactly that: a set of tests sharing common needs for set-up and tear-down.
Ideally, a unit test should be named testThisConditionHolds. public_items is not a "condition". Wherever the incredibly-black-magic public_items is supposed to come from I'd be writing tests like:
def testNoPublicItemsRecordedIfNoneDefined(self):
    assertEqual([], public_items)

def testOnePublicItemIsRecordedRight(self):
    item = PublicItem()
    assertEqual([item], public_items)

def testTwoPublicItemsAreRecordedRight(self):
    item1 = PublicItem()
    item2 = PublicItem()
    assertEqual([item1, item2], public_items)
If public_items is a magical list supposed to be magically populated as a side effect of calling a magic function PublicItem, then calling the latter in setUp would in fact destroy the ability to test these simple cases properly, so of course I wouldn't do it!