How to deal with setUp() addiction when writing tests? - unit-testing

I'm somewhat new to writing tests. I find myself struggling to keep my setUps clean and concise; instead I try to accomplish too much with an uber-setUp.
My question is, how do you split up your testing?
Do your tests include one or two lines of independent setup code?
def test_public_items():
    item1 = PublicItem()
    item2 = PublicItem()
    assertEqual(public_items, [item1, item2])
or do you factor that into the setUp no matter what?
If that's the case, how do you deal with test class separation? Do you create a new class when one set of tests needs a different setUp than another set of tests?

I believe you've hit a couple of anti-patterns here:
Excessive setup
Inappropriately shared fixture
The rule of thumb is that all tests in a particular test fixture should need the code in the Setup() method.
If you write a test that needs more or less setup than what is currently present, it may be a hint that it belongs in a new test fixture. Inertia against creating a new test fixture is what snowballs the setup code into one big ball of mud, trying to do everything for all tests. That hurts readability quite a bit: you can't see the test amid the setup code, most of which may not even be relevant to the test you're looking at.
That said, it is okay for a test to have a few specific setup instructions inline, on top of the common setup. (Those belong to the first step of the Arrange-Act-Assert triad.) However, if those instructions are duplicated across multiple tests, you should probably move all of those tests out into a new test fixture, whose
setup_of_new_fixture = old_setup + recurring_arrange_instruction
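For example, in Python's unittest that split might look like this (a minimal sketch; Catalog and PublicItem are hypothetical stand-ins, not from the question):

import unittest

class PublicItem:
    pass

class Catalog:  # hypothetical container that tracks published items
    def __init__(self):
        self.items = []

    def publish(self, item):
        self.items.append(item)
        return item

class EmptyCatalogTests(unittest.TestCase):
    def setUp(self):  # the old, common setup
        self.catalog = Catalog()

    def test_starts_empty(self):
        self.assertEqual([], self.catalog.items)

class PopulatedCatalogTests(unittest.TestCase):
    def setUp(self):  # new fixture: old setup + the recurring arrange step
        self.catalog = Catalog()
        self.item = self.catalog.publish(PublicItem())

    def test_contains_published_item(self):
        self.assertEqual([self.item], self.catalog.items)

Each fixture's setUp now contains exactly what every test in that class needs, and nothing more.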

Yes, a test fixture (embodied in a class) should be exactly that: a set of tests sharing common needs for set-up and tear-down.
Ideally, a unit test should be named testThisConditionHolds. public_items is not a "condition". Wherever the incredibly-black-magic public_items is supposed to come from, I'd be writing tests like:
def testNoPublicItemsRecordedIfNoneDefined(self):
    self.assertEqual([], public_items)

def testOnePublicItemIsRecordedRight(self):
    item = PublicItem()
    self.assertEqual([item], public_items)

def testTwoPublicItemsAreRecordedRight(self):
    item1 = PublicItem()
    item2 = PublicItem()
    self.assertEqual([item1, item2], public_items)
If public_items is a magical list supposed to be magically populated as a side effect of calling a magic function PublicItem, then calling the latter in setUp would in fact destroy the ability to test these simple cases properly, so of course I wouldn't do it!

Related

Best way to setup test cases that are "late" in a process

Best way of handling many test cases needing to navigate to a particular place before they run their asserts? For example, if a process has 5 steps and a test case needs to test a part of step 5, how can I set it up? Should I call the test case methods of the previous steps inside this test case, and do that for all test cases that test step 5?
Similarly, if a test case goes deep into the website - through many pages - should that navigation be rewritten for every test case, or should it just call some test that already does that?
Any tips on these situations?
Best way of handling many test cases needing to navigate to a particular place before they run their asserts? For example a process has 5 steps and a test case needs to test a part of step 5, how can I set it up?
I would create a Transporter class/pattern that the test case can call to get to that state. That makes the navigation code reusable by other tests and keeps the tests from getting too big or complicated. You can also use the setUp() method of your xUnit testing framework, which is called before each test, and place the navigation code there if other tests need it too.
Similarly, if a test case goes deep into the website - through many pages - should that navigation be re-written for every test case, or just call some test that already does that?
I would extract that code into a helper class called Transporter and have the tests call it to easily navigate to the deep page in one method call.
I wrote about this and other test design patterns in a conference paper at the Pacific Northwest Software Quality Conference. Look for the Transporter pattern in that paper.
Here's an example using a Transporter class where you have to login and navigate to the checkout page:
public class Transporter {
public static void login() {
//App specific code to navigate to login screen and login to the application
}
public static void gotoCheckout() {
//App specific code to navigate to the checkout page
}
}
Now your tests can just call this Transporter class to do the navigation for them.
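If your suite were in Python, a sketch of the same idea might look like this (the navigation internals are placeholders for app-specific code, not real API calls):

import unittest

class Transporter:
    @staticmethod
    def login():
        pass  # app-specific: open the login screen and log in

    @staticmethod
    def goto_checkout():
        pass  # app-specific: navigate to the checkout page

class CheckoutTests(unittest.TestCase):
    def setUp(self):
        # every test in this fixture starts at the checkout page
        Transporter.login()
        Transporter.goto_checkout()

    def test_checkout_page_is_reachable(self):
        pass  # assertions against the checkout page would go here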
If you are using BDD, such as JBehave (I am not sure whether Cucumber has the same feature), where you have the Given-When-Then story (feature) structure in Gherkin syntax, you can use the "GivenStories" feature: prequel stories that set you up for your specific test case, exactly as you are describing.
There's nothing wrong, however, when using BDD, with simply writing multi-scenario stories leading up to the particular test case, i.e. the first scenario logs in, the second navigates to a certain page, and the third performs your actual test.
By writing it as a separate story (feature), however, you can re-use those as "GivenStories" in JBehave as a shortcut to get where you need to be without duplicating the steps.
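For instance, a JBehave story file can declare its prequels at the top like this (a sketch; the paths and step text are illustrative, not from the original answer):

GivenStories: stories/login.story, stories/navigate_to_checkout.story

Scenario: Checkout accepts a valid order
Given I am on the checkout page
When I submit a valid order
Then I see the order confirmation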

Maximizing test coverage and minimizing overlap/duplication

What are people's strategies for maximizing test coverage while minimizing test duplication and overlap, particularly between unit tests and functional or integration tests? The issue is not specific to any particular language or framework, but just as an example, say you have a Rails app that allows users to post comments. You might have a User model that looks something like this:
class User < ActiveRecord::Base
  def post_comment(attributes)
    comment = self.comments.create(attributes)
    notify_friends('created', comment)
    share_on_facebook('created', comment)
    share_on_twitter('created', comment)
    award_badge('first_comment') unless self.comments.size > 1
  end

  def notify_friends(action, object)
    friends.each do |f|
      f.notifications.create(subject: self, action: action, object: object)
    end
  end

  def share_on_facebook(action, object)
    FacebookClient.new.share(subject: self, action: action, object: object)
  end

  def share_on_twitter(action, object)
    TwitterClient.new.share(subject: self, action: action, object: object)
  end

  def award_badge(badge_name)
    self.badges.create(name: badge_name)
  end
end
As an aside, I would actually use service objects rather than put this type of application logic in models, but I wrote the example this way to keep it simple.
Anyway, unit testing the post_comment method is pretty straightforward. You would write tests to assert that:
The comment gets created with the given attributes
The user's friends receive notifications about the user creating the comment
The share method is called on an instance of FacebookClient, with the expected hash of params
Ditto for TwitterClient
The user gets the 'first_comment' badge when this is the user's first comment
The user doesn't get the 'first_comment' badge when he/she has previous comments
But then how do you write your functional and/or integration tests to ensure the controller actually invokes this logic and produces the desired results in all of the different scenarios?
One approach is just to reproduce all of the unit test cases in the functional and integration tests. This achieves good test coverage but makes the tests extremely burdensome to write and maintain, especially when you have more complex logic. This does not seem like a viable approach for even a moderately complex application.
Another approach is to just test that the controller invokes the post_comment method on the user with the expected params. Then you can rely on the unit test of post_comment to cover all of the relevant test cases and verify the results. This seems like an easier way to achieve the desired coverage, but now your tests are coupled with the specific implementation of the underlying code. Say you find that your models have gotten bloated and difficult to maintain and you refactor all of this logic into a service object like this:
class PostCommentService
  attr_accessor :user, :comment_attributes
  attr_reader :comment

  def initialize(user, comment_attributes)
    @user = user
    @comment_attributes = comment_attributes
  end

  def post
    @comment = self.user.comments.create(self.comment_attributes)
    notify_friends('created', comment)
    share_on_facebook('created', comment)
    share_on_twitter('created', comment)
    award_badge('first_comment') unless self.user.comments.size > 1
  end

  private

  def notify_friends(action, object)
    self.user.friends.each do |f|
      f.notifications.create(subject: self.user, action: action, object: object)
    end
  end

  def share_on_facebook(action, object)
    FacebookClient.new.share(subject: self.user, action: action, object: object)
  end

  def share_on_twitter(action, object)
    TwitterClient.new.share(subject: self.user, action: action, object: object)
  end

  def award_badge(badge_name)
    self.user.badges.create(name: badge_name)
  end
end
Maybe the actions like notifying friends, sharing on Twitter, etc. would logically be refactored into their own service objects too. Regardless of how or why you refactor, your functional or integration test would now need to be rewritten if it previously expected the controller to call post_comment on the User object. Also, these types of assertions can get pretty unwieldy. In the case of this particular refactoring, you would now have to assert that the PostCommentService constructor is invoked with the appropriate User object and comment attributes, and then assert that the post method is invoked on the returned object. This gets messy.
Also, your test output is a lot less useful as documentation if the functional and integration tests describe implementation rather than behavior. For example, the following test (using Rspec) is not that helpful:
it "creates a PostCommentService object and executes the post method on it" do
...
end
I would much rather have tests like this:
it "creates a comment with the given attributes" do
...
end
it "creates notifications for the user's friends" do
...
end
How do people solve this problem? Is there another approach that I'm not considering? Am I going overboard in trying to achieve complete code coverage?
I'm talking from a .NET/C# perspective here, but I think it's generally applicable.
To me, a unit test tests just the object under test, and not any of its dependencies. The class is tested to make sure it communicates correctly with its dependencies, using mock objects to verify that the appropriate calls are made, and that it handles returned objects in the correct manner (in other words, the class under test is isolated). In the example above, this would mean mocking the Facebook/Twitter interfaces and checking the communication with each interface, not the actual API calls themselves.
It looks like in your original unit test example above you are talking about testing all of the logic (i.e. posting to Facebook, Twitter, etc.) in the same tests. I would say that if the tests are written this way, they are actually functional tests already. Now, if you absolutely cannot modify the class under test at all, writing a unit test at this point would be unnecessary duplication. But if you can modify the class under test, refactoring so that dependencies sit behind interfaces, you could have a set of unit tests for each individual object, and a smaller set of functional tests that check that the whole system behaves correctly together.
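To make the isolation idea concrete, here is a minimal Python sketch using unittest.mock; PostCommentService and its collaborator here are hypothetical stand-ins for the Rails code above, not anyone's real API:

from unittest.mock import Mock

class PostCommentService:  # hypothetical stand-in for the Ruby service above
    def __init__(self, user, facebook_client):
        self.user = user
        self.facebook_client = facebook_client

    def post(self, comment):
        # real code would also create the comment, notify friends, etc.
        self.facebook_client.share(subject=self.user, action='created', object=comment)

def test_post_shares_on_facebook():
    facebook = Mock()  # mocked dependency: no real API call is made
    service = PostCommentService(user='alice', facebook_client=facebook)
    service.post('hello')
    # verify the communication with the dependency, not Facebook itself
    facebook.share.assert_called_once_with(subject='alice', action='created', object='hello')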
I know you said "regardless of how they are refactored above", but to me, refactoring and TDD go hand in hand. Trying to do TDD or unit testing without refactoring is an unnecessarily painful experience, and it leads to a design that is more difficult to change and maintain.

Unit testing style question: should the creation and deletion of data be in the same method?

I am writing unit tests for a PHP class that maintains users in a database. I now want to test if creating a user works, but also if deleting a user works. I see multiple possibilities to do that:
I only write one method that creates a user and deletes it afterwards
I write two methods. The first one creates the user and saves its ID. The second one deletes that user with the saved ID.
I write two methods. The first one only creates a user. The second method creates a user itself, so that there is one to delete afterwards.
I have read that every test method should be independent of the others, which means the third possibility is the way to go, but it also means every method has to set up its test data by itself (e.g. if you want to test whether it's possible to add a user twice).
How would you do it? What is good unit testing style in this case?
Two different things = Two tests.
Test_DeleteUser() could be in a different test fixture as well, because it has different Setup() code: ensuring that a user already exists.
[SetUp]
public void SetUp()
{
    CreateUser("Me");
    Assert.IsTrue(User.Exists("Me"), "Setup failed!");
}

[Test]
public void Test_DeleteUser()
{
    DeleteUser("Me");
    Assert.IsFalse(User.Exists("Me"));
}
This means that if Test_CreateUser() passes and Test_DeleteUser() doesn't - you know that there is a bug in the section of the code that is responsible for deleting users.
Update: I was just giving some thought to Charlie's comments on the dependency issue - by which I mean that if creation is broken, both tests fail even though deletion may work fine. The best I could do was to move a guard check into Setup so that a setup failure shows up in the Errors and Failures tab, to distinguish setup failures from genuine test failures. (In general, setup failures should be easy to spot because an entire test fixture shows red.)
How you do this depends on how you utilize mocks and stubs. I would go for the more granular approach, so: two different tests.
Test A
    CreateUser("testuser")
    assertTrue(CheckUserInDatabase("testuser"))

Test B
    LoadUserIntoDB("testuser2")
    DeleteUser("testuser2")
    assertFalse(CheckUserInDatabase("testuser2"))

TearDown
    RemoveFromDB("testuser")
    RemoveFromDB("testuser2")

CheckUserInDatabase(string user)
    ... // Access the DAL and check the item is in the DB
If you utilize mocks and stubs, you don't need to access the DAL until you do your integration testing, so you won't need as much work on asserting and setting up the data.
Usually, you should have two methods, but reality still wins over text on paper in the following case:
You need a lot of expensive setup code to create the object to test. This is a code smell and should be fixed, but sometimes you really have no choice (think of code that aggregates data from several places: you really need all those places). In this case, I write mega tests (where a single test case can have thousands of lines of code spread over many methods). The test creates the database and all tables, fills them with defined data, runs the code step by step, and verifies each step.
This should be a rare case. If you need one, you must actively ignore the rule "Tests should be fast". This scenario is so complex that you want to check as many things as possible. I had a case where I would dump the contents of 7 database tables to files and compare them for each of the 15 SQL updates (which gave me 105 files to compare in a single test) plus about a million asserts that would run.
The goal here is to make the test fail in such a way that you notice the source of the problem right away. It's like pouring all the constraints into code and making them fail early, so you know which line of application code to check. The main drawback is that these test cases are hell to maintain: every change to the app code means updating many of the 105 "expected data" files.
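A minimal Python sketch of that dump-and-compare idea (dump_table and the file layout are hypothetical, not from the original answer):

from pathlib import Path

def assert_matches_expected(actual_text: str, expected_file: Path) -> None:
    # fail with a message that pinpoints which table/step diverged
    expected = expected_file.read_text()
    assert actual_text == expected, f"table dump differs from {expected_file}"

# usage after each SQL update step, for each table, e.g.:
# assert_matches_expected(dump_table("orders"), Path("expected/step03_orders.txt"))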

At what level should I unit test?

Let's say in my user model I have a ChangePassword method. Given an already initialised user model, it takes the new password as a parameter and does the database work to make the magic happen. The front end to this is a web form, where the user enters their current password and their desired new password. The controller then checks to see if the user's current password is correct. If so, it invokes the user model's ChangePassword method. If not, it displays an error to the user.
From what I hear, you're supposed to unit test the smallest piece of code possible, but doing that in this case completely ignores the check that the user entered the correct current password. So what should I do?
Should I:
A) Unit test only from the controller, effectively testing the model function too?
OR
B) Create 2 different tests; one for the controller and one for the model?
When in doubt, test both. If you only test the controller and the test fails, you don't know whether the issue is in the controller or the model. If you test both, then you know where the problem lies by looking at the model's test result: if it passes, the controller is at fault; if it fails, the model is at fault.
A)
The test fails. You have a problem in the model, the controller, or both, and you spend time searching through both.
B)
The model and controller tests fail... chances are you have a problem in the model.
Only the controller test fails... chances are better that the problem is not in the model, only in the controller.
Only the model test fails... hard to see this happening, but if it does somehow then you know the problem is in the model, not in the controller.
It's good to test both layers. It'll make finding the problem later that much easier.
There should be multiple tests here:
Verify the correct password was entered.
Validate the new password, e.g. doesn't match existing one, has minimum length, sufficient complexity, tests for errors thrown, etc.
Updating the database to the new password.
Don't forget that the tests can also help act as documentation of the code in a sense so that it becomes clear for what each part of the code is there.
You might want to consider another option: Mock objects. Using these, you can test the controller without the model, which can result in faster test execution and increased test robustness (if the model fails, you know that the controller still works). Now you have two proper unit tests (both testing only a single piece of code each), and you can still add an integration test if required.
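A minimal Python sketch of that approach with unittest.mock (the controller and model names are hypothetical, not from the question):

from unittest.mock import Mock

class PasswordController:  # hypothetical controller
    def __init__(self, user):
        self.user = user

    def change_password(self, current, new):
        # the controller's job: verify the current password before delegating
        if not self.user.check_password(current):
            return "error"
        self.user.change_password(new)  # the model does the database work
        return "ok"

def test_wrong_current_password_is_rejected():
    user = Mock()  # mocked model: the controller is tested in isolation
    user.check_password.return_value = False
    result = PasswordController(user).change_password("bad", "new-secret")
    assert result == "error"
    user.change_password.assert_not_called()  # the model is never touched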
Unit testing means testing every unit on its own, so in this case you would need to build two unit tests: one for the frontend and one for the backend.
To test the combination of both, an integration test is needed - at least that's what the ISTQB calls it.
If you code object-oriented, you usually build unit tests for every class, as the class is the smallest independently testable unit.
A) is not a unit test, in my opinion, since it exercises more than one class (or layer). So you should really be unit testing the model only.

Can a fixture be changed dynamically between test methods in CakePHP?

Is it possible to have a fixture change between test methods? If so, how can I do this?
My scenario for this problem:
In the CakePHP framework I am building tests for a behavior that is configured by adding fields to the table. This is intended to work in the same way that adding the "created" and "modified" fields will auto-populate those fields on save.
To test this I could create dozens of fixtures/model combos to test the different setups, but it would be a hundred times better, faster and easier to just have the fixture change "shape" between test methods.
If you are not familiar with the CakePHP framework, you may still be able to help me, as it uses SimpleTest.
Edit: rephrased question to be more general
I'm not familiar specifically with CakePHP, but this kind of thing seems to happen anywhere with fixtures.
There is no built-in way in Rails, at least, for this to happen, and I imagine there isn't in CakePHP or anywhere else either, because the whole idea of a fixture is that it is fixed.
There are two "decent" workarounds I'm aware of:
Write a changeFixture method, and just before your asserts, run it with parameters describing what to change. It should go and update the database or do whatever else needs to be done.
Don't use fixtures at all, and use some kind of object factory or object generator to create your objects each time, as in the sketch below.
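A minimal Python sketch of the factory idea (the User dataclass is a hypothetical stand-in for a real model class):

from dataclasses import dataclass

@dataclass
class User:  # hypothetical stand-in for a real model class
    name: str
    email: str
    active: bool

def make_user(**overrides):
    # sensible defaults, with per-test overrides instead of one shared fixture
    defaults = {"name": "Test User", "email": "test@example.com", "active": True}
    defaults.update(overrides)
    return User(**defaults)

def test_inactive_user():
    user = make_user(active=False)  # this test only cares about 'active'
    assert user.active is False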
This is not an answer to my question, but a solution to my example issue.
Instead of using multiple fixtures or changing the fixtures, I edit the Model::_schema arrays, removing the fields that I want to test without. This makes the model act as if the fields were not there, but I am unsure whether this is a 100% valid test. I do not think it is for all cases, but it works for my example.