Teardown logic in Kent Beck's xUnit example - unit-testing

I've been working through Kent Beck's Test-Driven Development: By Example, more specifically the xUnit example (Chapters 18-24). I have some questions about the teardown functionality. Originally, the flow of a test run is implemented as a method run() on a class TestCase in the following way:
def run(self):
    result = TestResult()
    result.testStarted()
    self.setUp()
    method = getattr(self, self.name)
    method()
    self.tearDown()
    return result
At this point he leaves it as an exercise to the reader to change run() so that the teardown logic is executed even when method() fails.
In the next chapter (Dealing with Failure), however, the method run() gets expanded in order to register a test failure:
def run(self):
    result = TestResult()
    result.testStarted()
    self.setUp()
    try:
        method = getattr(self, self.name)
        method()
    except:
        result.testFailed()
    self.tearDown()
    return result
After this edit, the item concerning teardown logic is still open on the todo list, but to me the problem seems solved. The except clause is as general as can be, and result.testFailed() will never throw an exception. Therefore it seems to me that, whatever method() may do, the teardown logic will always be executed. I can imagine putting the teardown in a finally clause to better signify intent and to be a bit more robust against changes in testFailed(), but is this change (and hence the exercise) redundant when run() has this form?
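For reference, a sketch of the finally-based variant described above, keeping the names from Beck's example (whether a failure inside setUp() itself should also be handled is a separate question):

def run(self):
    result = TestResult()
    result.testStarted()
    self.setUp()
    try:
        method = getattr(self, self.name)
        method()
    except:
        result.testFailed()
    finally:
        # runs even if testFailed() or the except block itself raises
        self.tearDown()
    return result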

It is an exercise that is supposed to help you learn. It sets the stage for "Dealing with Failure".

Passing parameters to tearDown method

Suppose I have an entity that creates an SVN branch as part of its work. To test it functionally, I create multiple, almost identical test methods (I use the Python unittest framework, but the question applies to any test framework):
class Tester(unittest.TestCase):
    def test_valid1_url(self):
        url = "valid1"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_valid2_url(self):
        url = "valid2"
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)  # assume I have this method implemented

    def test_invalid_url(self):
        url = "invalid"
        self.assertRaises(ValueError, BranchCreator().create_branch, url)
After each test I want to remove the resulting branch, or do nothing if the test failed. Ideally I would use something like the following:
@teardown_params(url='valid1')
def test_valid1_url(self):
    ...

def tearDown(self, url):
    if url_exists(url): remove_branch(url)
But tearDown does not accept any parameters.
I see a few rather dirty solutions:
a) Create a field used_url in Tester, set it in every test method, and use it in tearDown:
def test_valid1_url(self):
    self.used_url = "valid1"
    BranchCreator().create_branch(self.used_url)
    self.assertUrlExists(self.used_url)
...
def tearDown(self):
    if url_exists(self.used_url): remove_branch(self.used_url)
It should work because (at least in my environment) all tests run sequentially, so there would be no conflicts. But this solution violates the test-independence principle because of the shared variable, and if I ever managed to run the tests concurrently it would not work.
b) Use a separate method like cleanup(self, url) and call it from every test method.
Is there any other approach?
I think solution b) could work, even though it requires a call to the helper method in every test, which sounds to me like a kind of duplication.
Another approach could be calling the helper method inside the assertUrlExists function. That way the duplication is removed and you avoid checking the existence of the URL a second time just to manage the cleanup: you already have the assertion result and can use it.
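As a rough sketch of solution b), one variant is to register the parameterized cleanup with unittest's addCleanup, which accepts arguments and runs even when the test body fails (BranchCreator, url_exists, remove_branch and assertUrlExists are the question's hypothetical helpers):

import unittest

class Tester(unittest.TestCase):
    def cleanup_branch(self, url):
        # Remove the branch only if the test actually managed to create it.
        if url_exists(url):
            remove_branch(url)

    def test_valid1_url(self):
        url = "valid1"
        # addCleanup registers a callback (with arguments) that unittest runs
        # after the test and tearDown, whether the test passed or failed.
        self.addCleanup(self.cleanup_branch, url)
        BranchCreator().create_branch(url)
        self.assertUrlExists(url)

This keeps each test independent: the URL travels with the cleanup callback instead of living in a shared field.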

Does "unit test only one thing" means one feature or one whole scenario of a unit?

When people say "test only one thing", do they mean test one feature at a time, or one scenario at a time?
method() {
    // set up data
    def data = new Data()
    // send external webservice call
    def success = service.webserviceCall(data)
    // persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to follow the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure I understood it correctly.
It was mentioned there that this makes it easier to isolate errors, but I'm not sure whether it's overkill; complex methods will tend to have lots of tests.
(Please excuse my wording of "by feature" and "by scenario"; I'm not sure how else to phrase them.)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just about how it's currently implemented. Otherwise your tests will just mirror the production code and break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
You might have heard the adage "one assert per test". It is good advice in general because a test stops executing as soon as a single assert fails; any asserts further down are never reached. By splitting the asserts across multiple tests you get more feedback when something goes wrong: when tests go red you know exactly which asserts fail, and you don't have to grind through the fix-one-assertion-failure, run-tests, fix-the-next-failure cycle.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Sidenote: you construct your data object inside the method itself and call that object's save method. How do you detect in your tests that the data is saved?
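To make that concrete, here is a rough Python sketch of the separate tests with unittest.mock; the Unit class with an injected service and data factory is my assumption, not something from the question (the question's pseudocode constructs Data inside the method, which is exactly what makes the save behaviour hard to observe):

import unittest
from unittest.mock import MagicMock

class Unit:
    """Hypothetical unit under test, mirroring the question's pseudocode."""
    def __init__(self, service, data_factory):
        self.service = service
        self.data_factory = data_factory

    def method(self):
        data = self.data_factory()                    # set up data
        success = self.service.webserviceCall(data)   # external web service call
        if success:
            data.save()                               # persist only on success

class MethodBehaviourTests(unittest.TestCase):
    def setUp(self):
        # Replace the collaborators with mocks so each behaviour can be observed.
        self.service = MagicMock()
        self.data = MagicMock()
        self.unit = Unit(self.service, data_factory=lambda: self.data)

    def test_calls_webservice_with_created_data(self):
        self.unit.method()
        self.service.webserviceCall.assert_called_once_with(self.data)

    def test_saves_data_when_webservice_call_succeeds(self):
        self.service.webserviceCall.return_value = True
        self.unit.method()
        self.data.save.assert_called_once()

    def test_does_not_save_data_when_webservice_call_fails(self):
        self.service.webserviceCall.return_value = False
        self.unit.method()
        self.data.save.assert_not_called()

Each of these tests can fail for exactly one reason, which is what makes a red bar easy to diagnose.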
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing one feature at a time. I agree with the part you quoted: this approach makes it "easier to isolate the errors". Roy Osherove really does know what he is talking about, especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not specifically referring to BDD here). Essentially, I test each behavior that I expect from the code. You said that you are mocking out the dependencies (web service and data storage), so I would still class this as a unit test, with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly then that should be no concern of my test. My test should only be concerned with the external dependencies and public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, generally speaking, is a test that does not call a database, the file system, or a web service. The idea is that you should be able to run your unit tests even without an internet connection. So if a method calls a web service or a database, you are expected to mock the responses from the external system and test that unit of work only. As prgmtc mentioned above, one assert per test is the way to go.
Second, if you are calling a real web service or database, consider calling those tests integration tests, depending on what exactly you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's Three Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally have only a single assert statement. In reality you will often end up with a number of assert statements that act as a single logical assert, as this often helps with the readability of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccoutTypeResult = "3";

    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccoutTypeResult);

    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------
    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note: I would have unit tested the ValidationResult class itself, but in the test above I feel that including both asserts gives more clarity.

FactoryBoy: how to teardown?

I don't understand how teardown in FactoryBoy + Django works.
I have a testcase like this:
class TestOptOutCountTestCase(TestCase):
    multi_db = True

    def setUp(self):
        TestCase.setUp(self)
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1#test.de', optin=1)

    def test_optouts2(self):
        report = ReportOptOutsView()
        result = report.get_optouts()
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0][5], -1)
setUp is run once for all tests, correct? Now if I had a second test and needed a clean state before running it, how would I achieve this? Thanks.
If I understand you correctly, you don't need tearDown in this case, as resetting the database between tests is the default behaviour of a Django TestCase.
See:
At the start of each test case, before setUp() is run, Django will flush the database, returning the database to the state it was in directly after migrate was called.
...
This flush/load procedure is repeated for each test in the test case, so you can be certain that the outcome of a test will not be affected by another test, or by the order of test execution.
Or do you mean to limit the creation of instances via the OptoutFactory to certain tests?
Then you probably shouldn't put the creation of those instances into setUp.
Or you create two variants of your TestCase: one for all tests that rely on the factory data and one for those that don't, as sketched below.
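For that second variant, a minimal sketch reusing the factory and view from the question could look like this (the class names and the zero-result assertion are my assumptions; OptoutFactory and ReportOptOutsView are imported as in your test module):

import datetime
from django.test import TestCase

class OptOutWithFactoryDataTests(TestCase):
    """Tests that rely on the factory-created opt-out."""
    multi_db = True

    def setUp(self):
        TestCase.setUp(self)
        self.date = datetime.datetime.strptime('05Nov2014', '%d%b%Y')
        OptoutFactory.create(p_id=1, cdate=self.date, email='inv1#test.de', optin=1)

    def test_optouts2(self):
        result = ReportOptOutsView().get_optouts()
        self.assertEqual(len(result), 1)

class OptOutCleanStateTests(TestCase):
    """No factory data here; Django flushes the database before each test,
    so rows created by the other test class cannot leak in."""
    multi_db = True

    def test_no_optouts(self):
        self.assertEqual(len(ReportOptOutsView().get_optouts()), 0)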
Regarding the uses of tearDown check this answer: Django when to use teardown method

Setup reason codes for setupTestcase in Unit Test Framework [AX 2012]

I am using reason code entries in a Dialog form.
To write the unit test for this, I first need to insert reason codes dynamically via code in setUpTestCase in the Unit Test Framework in Dynamics AX 2012.
How can I do this? I haven't found any help on the internet yet.
I figured out the answer myself.
To write a unit test using the Unit Test Framework, you create a class which extends the SysTestCase class (a system class).
setUp(), setUpTestCase(), tearDown() and tearDownTestCase() are base-class methods used to set up and tear down data for the test case.
setUp() and tearDown() are called at the start and end, respectively, of each test method in the test case class.
Note that setUp() and tearDown() run once for every test method, while setUpTestCase() and tearDownTestCase() run only once per test case, at the start and end respectively.
Coming back to what I asked: I had to set up reason codes together with reason comments for the test case.
The following X++ code does that.
private void createReason(str _reasonCode, str _reasonComment)
{
    ReasonTable _reasonTable;

    _reasonTable.clear();
    _reasonTable.Asset = NoYes::Yes;
    _reasonTable.Ledger = NoYes::Yes;
    _reasonTable.Reason = _reasonCode;
    _reasonTable.Description = _reasonComment;
    _reasonTable.doInsert();
}
You might need different settings when you set up reasons in your test case. For example, you might want to set
_reasonTable.Asset = NoYes::No;
instead of
_reasonTable.Asset = NoYes::Yes;
Call the createReason() function in setUpTestCase() and the reasons are inserted into the database.
That's all. I hope it helps someone at some point.
Have you tried the setUp() and tearDown() methods on the test class?
http://msdn.microsoft.com/EN-US/library/bb496539.aspx
You can create the data before the test class runs and delete it when testing ends.

How do I run my Django testcase multiple times?

I want to perform some exhaustive testing against one of my test cases (say, creating a document, to debug some weird behaviour I am encountering).
My brute-force approach was to fire python manage.py test myapp in a loop via Popen or os.system, but now I would like to do it the pure unittest way:
class SimpleTest(unittest.TestCase):
    def setUp(self):
        ...
    def test_01(self):
        ...
    def tearDown(self):
        ...

def suite():
    suite = unittest.TestCase()
    suite.add(SimpleTest("setUp"))
    suite.add(SimpleTest("test_01"))
    suite.add(SimpleTest("tearDown"))
    return suite

def main():
    for i in range(n):
        suite().run("runTest")
I ran python manage.py test myapp and I got
File "/var/lib/system-webclient/webclient/apps/myapps/tests.py", line 46, in suite
suite = unittest.TestCase()
File "/usr/lib/python2.6/unittest.py", line 216, in __init__
(self.__class__, methodName)
ValueError: no such test method in <class 'unittest.TestCase'>: runTest
I've googled the error, but I'm still clueless. (I was told to add an empty runTest method, but that doesn't sound right at all...)
Well, according to python's unittest.TestCase:
The simplest TestCase subclass will simply override the runTest()
method in order to perform specific testing code
As you can see, my whole goal is to run SimpleTest N times and keep track of how many of the N runs pass or fail.
What option do I have?
Thanks.
Tracking race conditions via unit tests is tricky. Sometimes you're better off hitting your frontend with an automated testing tool like Selenium: unlike a unit test, the environment is the same and there's no extra work needed to ensure concurrency. Here's one way to run concurrent code in tests when there's no better option: http://www.caktusgroup.com/blog/2009/05/26/testing-django-views-for-concurrency-issues/
Just keep in mind that a concurrent test is no definitive proof that you're free from race conditions; there's no guarantee it will recreate every possible ordering of execution among the processes.
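If, on the other hand, the goal is simply to repeat the test N times in one process and tally the outcomes, a minimal sketch with the stock unittest API might look like the following (the suite is built with a TestLoader instead of instantiating TestCase directly; setUp and tearDown are invoked automatically around each test and must not be added to the suite themselves). A Django TestCase would additionally need Django's test environment and test database set up, e.g. by going through Django's test runner.

import unittest

def run_repeatedly(n=100):
    passed = failed = 0
    for _ in range(n):
        # Collect the test_* methods of SimpleTest (the class from the question).
        suite = unittest.TestLoader().loadTestsFromTestCase(SimpleTest)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        if result.wasSuccessful():
            passed += 1
        else:
            failed += 1
    print("passed: %d, failed: %d" % (passed, failed))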