Should unit tests test all passing conditions as well as all failing conditions?
For example, imagine I have a test Widget_CannotActivateWidgetIfStateIsCancelled.
And let's say there are 100 possible states.
Can I get away with testing only that I cannot activate my widget when State == Cancelled, or do I have to also test that I CAN activate it in each of the other 99 states?
Is there some compromise that can let me avoid spending all my time writing tests? :)
It seems you are asking whether your tests should be exhaustive: whether you should test for all possible states. The answer is a resounding no, for the simple reason that even simple code can have far too many states: even a small program can have more potential states than you could test in all the time since the Big Bang.
You should instead use equivalence partitioning: identify groups of states, such that all the states in a group are likely to have similar behaviour, then have one test case per group.
If you do that, you might discover you need only two test cases.
This is a scenario where you want to use one parametrized test which gets all 99 values as input.
Using xUnit.net, this could look like this (untested, might contain small compilation errors):
[Fact]
public void Widget_CannotActivateWidgetIfStateIsCancelled()
{
    // Arrange ...
    sut.State = State.Cancelled;

    Assert.False(sut.CanActivate);
}

[Theory, ValidStatesData]
public void Widget_CanActivateWidgetIfStateIsNotCancelled(State state)
{
    // Arrange ...
    sut.State = state;

    Assert.True(sut.CanActivate);
}

private class ValidStatesDataAttribute : DataAttribute
{
    public override IEnumerable<object[]> GetData(
        MethodInfo methodUnderTest, Type[] parameterTypes)
    {
        return Enum.GetValues(typeof(State))
            .Cast<State>()
            .Except(new[] { State.Cancelled })
            .Select(x => new object[] { x });
    }
}
If you're using NUnit, you can use parameterized test attributes so you only have to write one test method but still cover all 100 values.
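For example, a minimal NUnit sketch of that idea (the Widget type with a settable State and a CanActivate property is the hypothetical one from the question) could feed every state into a single test with [ValueSource]:

using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class WidgetActivationTests
{
    // Source for the single parameterized test below: every value of the enum.
    private static IEnumerable<State> AllStates()
    {
        return Enum.GetValues(typeof(State)).Cast<State>();
    }

    [Test]
    public void CanActivate_IsFalseOnlyWhenCancelled([ValueSource("AllStates")] State state)
    {
        var sut = new Widget { State = state };   // hypothetical Widget construction

        // One expectation covers both partitions: Cancelled vs. everything else.
        Assert.That(sut.CanActivate, Is.EqualTo(state != State.Cancelled));
    }
}

Each enum value shows up as its own test case in the runner, so you still see exactly which state failed.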
I have a number of tests based upon reflection. The reflective tests work on a given assembly for a given type, i.e. look in service.dll for everything based upon IService, for example. I have done it this way because I need to pipe reference types into the tests, which I can't do with TestCase attributes. The code basically looks like the following:
public static void TestRunnerforTypeList(Type baseType, Func<Type, string, bool> excluder, Action<Type, string> mockResolver, Dictionary<Type, Dictionary<Type, Mock>> mocks, Action<dynamic, Type> assertor, string methodName)
{
    foreach (var type in GetTypesToTest(baseType))
    {
        if (excluder(type, methodName)) continue;
        dynamic objectToTest = CreateInstance(type, mocks);
        mockResolver(type, methodName);
        assertor(objectToTest, type);
    }
}
A call to this would look like the following:
[Test]
public void Positive_outcome_for_Get()
{
    GeneralTestRunner.TestRunnerforTypeList(typeof(IService<,,>),
        _serviceFactoryContext.ExcludeTypeForMethod,
        _serviceFactoryContext.ResolvePositiveMockSetup,
        _mocks,
        (service, type) => Assert.IsNotNull(service.Get(1)),
        "Get");
}
It's a simple assertion, but you get the idea. This way I get the benefit of TestCase attributes but with reference types, like mocks, being piped in.
However, I have other places where I use TestCase attributes, and ReSharper picks these up and increases the number of tests in the test session; the reflective ones don't.
My question is: is there a way of telling ReSharper (or NUnit) that the number of tests has increased every time the above assertor action is called?
Thanks in advance ;)
The only way I've found to do this is with ReSharper 6: drop the reflective aspect and use a TestCaseSource attribute instead. The TestCaseSource would point to a static IEnumerable yielding a factory reference type for each test. See the documentation below:
http://www.nunit.org/index.php?p=testCaseSource&r=2.5
The only issue compared to the question's implementation is that the reflective test will fail if the programmer doesn't create mocks for the instance creator; however, the type is still picked up (if it's in the same assembly). Therefore, if a lazy programmer creates a new type based upon IService (for example) with no tests, it fails out of the box.
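A rough sketch of that approach (the service types and factories below are invented stand-ins, not the original code):

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ServiceGetTests
{
    // Minimal stand-ins for the real services; in the real code these would be
    // discovered via reflection and wired up with mocks.
    private class CustomerService { public object Get(int id) { return new object(); } }
    private class OrderService    { public object Get(int id) { return new object(); } }

    // Each TestCaseData carries a reference type (a factory delegate), which plain
    // [TestCase] attributes cannot do because attribute arguments must be
    // compile-time constants. Each yielded item shows up as its own test.
    private static IEnumerable<TestCaseData> ServiceCases()
    {
        yield return new TestCaseData(new Func<dynamic>(() => new CustomerService()))
            .SetName("Positive_outcome_for_Get_CustomerService");
        yield return new TestCaseData(new Func<dynamic>(() => new OrderService()))
            .SetName("Positive_outcome_for_Get_OrderService");
    }

    [Test, TestCaseSource("ServiceCases")]
    public void Positive_outcome_for_Get(Func<dynamic> serviceFactory)
    {
        dynamic service = serviceFactory();
        Assert.IsNotNull(service.Get(1));
    }
}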
I am interested in the best way to write unit tests for a class whose public API involves some kind of a flow, for example:
public class PaginatedWriter {
public void AppendLine(string line) { ... }
public IEnumerable<string> GetPages() { ... }
public int LinesPerPage { get; private set; }
}
This class paginates text lines into the given number of lines per page. In order to test this class, we may have something like:
public void AppendLine_EmptyLine_AddsEmptyLine() { ... }
public void AppendLine_NonemptyLine_AddsLine() { ... }
public void GetPages_ReturnsPages() {
writer.AppendLine("abc");
writer.AppendLine("def");
var output = writer.GetPages();
...
}
Now, my question is: is it OK to make calls to AppendLine() in the last test method, even though we are testing the GetPages() method?
I know one solution in such situations is to make AppendLine() virtual and override it but the problem is that AppendLine() manipulates internal state which I don't think should be the business of the unit test.
The way I see it is that tests usually follow a pattern like 'setup - operate - check - teardown'.
I concentrate most of the common setup and teardown in the respective functions.
But test-specific setup and teardown is part of the test method itself.
I see nothing wrong with preparing the state of the Object Under Test using method calls of that object. In OOP I would not try to decouple the state from the operations since the paradigm goes to great lengths to unify them and if possible even hide the state. In my view the unit under test is the Class - state and methods.
I do make visual distinction in the code by separating the setup block from the operate block and the verify block with an empty line.
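For the GetPages() test from the question, that could look something like the following sketch (NUnit is used here; the PaginatedWriter constructor taking a lines-per-page value is an assumption, since the question only shows the property):

using System.Linq;
using NUnit.Framework;

[TestFixture]
public class PaginatedWriterTests
{
    [Test]
    public void GetPages_TwoLinesWithOneLinePerPage_ReturnsTwoPages()
    {
        // setup: prepare the object under test through its own public API
        var writer = new PaginatedWriter(1);   // hypothetical ctor: lines per page
        writer.AppendLine("abc");
        writer.AppendLine("def");

        // operate
        var pages = writer.GetPages().ToList();

        // verify
        Assert.AreEqual(2, pages.Count);
    }
}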
Yes, I'd say that's absolutely fine.
Test in whatever way you find practical. I find there's rather too much dogma around testing exactly one thing in each test method. That's a lovely ideal, but sometimes it's just not nearly as practical as slightly less pure alternatives.
When you have a simple method, for example sum(int x, int y), it is easy to write unit tests. You can check that the method correctly sums two sample integers, for example that 2 + 3 returns 5, and then check the same for some "extraordinary" inputs, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert.
What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a DOM object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all of these I can write a simple input, for example
<root><child/></root>
that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations.
Now, take a look at the following XML:
<root>
<child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
<child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
</root>
In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
All you need is to check one aspect of the SUT (System Under Test) per test.
[TestFixture]
public class XmlParserTest
{
    [Test, ExpectedException(typeof(XmlException))]
    public void FailIfXmlIsNotWellFormed()
    {
        Parse("<doc>");
    }

    [Test]
    public void ParseShortTag()
    {
        var doc = Parse("<doc/>");
        Assert.That(doc.DocumentElement.Name, Is.EqualTo("doc"));
    }

    [Test]
    public void ParseFullTag()
    {
        var doc = Parse("<doc></doc>");
        Assert.That(doc.DocumentElement.Name, Is.EqualTo("doc"));
    }

    [Test]
    public void ParseInnerText()
    {
        var doc = Parse("<doc>Text 1</doc>");
        Assert.That(doc.DocumentElement.InnerText, Is.EqualTo("Text 1"));
    }

    [Test]
    public void AttributesAreEmptyIfThereAreNoAttributes()
    {
        var doc = Parse("<doc></doc>");
        Assert.That(doc.DocumentElement.Attributes, Has.Count.EqualTo(0));
    }

    [Test]
    public void ParseAttribute()
    {
        var doc = Parse("<doc attribute11='attribute 11 value'></doc>");
        Assert.That(doc.DocumentElement.Attributes[0].Name, Is.EqualTo("attribute11"));
        Assert.That(doc.DocumentElement.Attributes[0].Value, Is.EqualTo("attribute 11 value"));
    }

    [Test]
    public void ChildNodesInnerTextAtFirstLevel()
    {
        var doc = Parse(@"<root>
                            <child1>Text 1</child1>
                            <child2>Text 2</child2>
                          </root>");
        Assert.That(doc.DocumentElement.ChildNodes, Has.Count.EqualTo(2));
        Assert.That(doc.DocumentElement.ChildNodes[0].InnerText, Is.EqualTo("Text 1"));
        Assert.That(doc.DocumentElement.ChildNodes[1].InnerText, Is.EqualTo("Text 2"));
    }

    // More tests ...

    private XmlDocument Parse(string xml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xml);
        return doc;
    }
}
Such an approach gives lots of advantages:
- Easy defect location: if something is wrong with attribute parsing, then only the tests on attributes will fail.
- Small tests are always easier to understand.
UPD: See what Gerard Meszaros (author of the xUnit Test Patterns book) says about this topic: xunitpatterns
One possibly contentious aspect of Verify One Condition per Test is what we mean by "one condition". Some test drivers insist on one assertion per test. This insistence may be based on using a Testcase Class per Fixture organization of the Test Methods and naming each test based on what the one assertion is verifying (e.g. AwaitingApprovalFlight.validApproverRequestShouldBeApproved). Having one assertion per test makes such naming very easy but it does lead to many more test methods if we have to assert on many output fields. Of course, we can often comply with this interpretation by extracting a Custom Assertion (page X) or Verification Method (see Custom Assertion) that allows us to reduce the multiple assertion method calls into one. Sometimes that makes the test more readable but when it doesn't, I wouldn't be too dogmatic about insisting on a single assertion.
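To make the "Custom Assertion / Verification Method" idea concrete, here is a sketch in the style of the fixture above, reusing its hypothetical Parse helper and usings, that folds several related asserts into one named verification call:

[Test]
public void ChildNodesAreParsedWithTheirInnerText()
{
    var doc = Parse(@"<root><child1>Text 1</child1><child2>Text 2</child2></root>");

    // One logical verification, even though the helper below asserts several facts.
    AssertChildTexts(doc.DocumentElement, "Text 1", "Text 2");
}

// Custom assertion / verification method: gives the multi-field check a name.
private static void AssertChildTexts(XmlElement parent, params string[] expectedTexts)
{
    Assert.That(parent.ChildNodes, Has.Count.EqualTo(expectedTexts.Length));
    for (int i = 0; i < expectedTexts.Length; i++)
    {
        Assert.That(parent.ChildNodes[i].InnerText, Is.EqualTo(expectedTexts[i]));
    }
}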
Multiple tests.
Use multiple tests. The same restrictions apply. You should test some normal operational cases, some failing cases, and some edge cases.
In the same way that you can presume that if sum(x, y) works for some values of x it will work with other values, you can presume that if an XML parser can parse a sequence of 2 nodes, it can also parse a sequence of 100 nodes.
To elaborate a bit on Ian's terse answer: make that bit of XML the setup, and have separate individual tests, each with their own assertion. That way you're not duplicating the setup logic, but you still have fine-grained insight into what's wrong with your parser.
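In NUnit terms, that could look something like this sketch (the XML is the one from the question; the Parse helper here stands in for the parser under test):

using System.Xml;
using NUnit.Framework;

[TestFixture]
public class ComplexDocumentTests
{
    private XmlDocument doc;

    [SetUp]
    public void ParseTheSharedInput()
    {
        // The complex input is built once, in the setup, not repeated per test.
        doc = Parse(@"<root>
                        <child1 attribute11='attribute 11 value' attribute12='attribute 12 value'>Text 1</child1>
                        <child2 attribute21='attribute 21 value' attribute22='attribute 22 value'>Text 2</child2>
                      </root>");
    }

    [Test]
    public void Child1HasAttribute11()
    {
        Assert.That(doc.DocumentElement.ChildNodes[0].Attributes["attribute11"].Value,
                    Is.EqualTo("attribute 11 value"));
    }

    [Test]
    public void Child1HasText1()
    {
        Assert.That(doc.DocumentElement.ChildNodes[0].InnerText, Is.EqualTo("Text 1"));
    }

    private static XmlDocument Parse(string xml)
    {
        var document = new XmlDocument();
        document.LoadXml(xml);
        return document;
    }
}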
Use NUnit's fluent syntax:
Assert.That( someString,
Is.Not.Null
.And.Not.Empty
.And.EqualTo("foo")
.And.Not.EqualTo("bar")
.And.StartsWith("f"));
I had a similar kind of requirement, where I wanted to have one Assert for various input sets. Please check out the link below, which I blogged:
Writing better unit tests
This can be applied to your problem as well. Construct a factory class that contains the logic for constructing the 'complex' input sets. The unit test case then has only one Assert.
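A rough sketch of that idea (the factory name and method are invented for illustration, not taken from the linked post):

using System.Xml;
using NUnit.Framework;

// Hypothetical factory that hides the noise of constructing complex XML inputs.
public static class XmlInputFactory
{
    public static string TwoChildrenWithAttributes()
    {
        return @"<root>
                   <child1 attribute11='attribute 11 value'>Text 1</child1>
                   <child2 attribute21='attribute 21 value'>Text 2</child2>
                 </root>";
    }
}

[TestFixture]
public class ParserFactoryInputTests
{
    [Test]
    public void FirstChildTextIsParsed()
    {
        var doc = new XmlDocument();
        doc.LoadXml(XmlInputFactory.TwoChildrenWithAttributes());

        // The factory did all the setup work; the test keeps a single Assert.
        Assert.That(doc.DocumentElement.ChildNodes[0].InnerText, Is.EqualTo("Text 1"));
    }
}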
Hope this helps.
Thanks ,
Vijay.
You might also want to use your own assertions (this is taken from your own question):
attribute11 and attribute12 belong to element1
('attribute11', 'attribute12').belongsTo('element1');
or
('element1 attribute11').length
BTW, this is similar to jQuery. You store this string in a complex graph repository. How would you unit test a very complex graph-connected database?
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet()
{
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
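For example, a minimal sketch of that setup-based approach, using an in-memory dictionary as a stand-in for the question's db object:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class GetTests
{
    // Stand-in for the question's db; a real test would use the actual data layer.
    private Dictionary<int, string> db;

    [SetUp]
    public void PutTheDataThereFirst()
    {
        db = new Dictionary<int, string>();
        db.Add(1, "someData");   // the Add that Get depends on happens in the setup step
    }

    [Test]
    public void GetTest()
    {
        // The test itself only exercises and asserts on Get.
        Assert.That(db[1], Is.EqualTo("someData"));
    }
}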
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
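A minimal sketch of that idea (the IDataStore interface and the hand-written fake are hypothetical; a mocking library could provide the fake instead):

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical data interface the production code depends on
// instead of a concrete database connection.
public interface IDataStore
{
    void Add(string data);
    string Get(string key);
}

// In-memory fake used by the unit tests; no real connection involved.
public class FakeDataStore : IDataStore
{
    private readonly Dictionary<string, string> items = new Dictionary<string, string>();

    public void Add(string data) { items[data] = data; }
    public string Get(string key) { return items.ContainsKey(key) ? items[key] : null; }
}

[TestFixture]
public class GetWithFakeStoreTests
{
    [Test]
    public void GetReturnsWhatWasAdded()
    {
        IDataStore store = new FakeDataStore();
        store.Add("someData");

        Assert.That(store.Get("someData"), Is.EqualTo("someData"));
    }
}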
I don't think that's possible out-of-box.
Anyway, your test class design as you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
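A sketch of that prerequisite check (the Database.CanConnect helper is a hypothetical stand-in for whatever connection probe the project uses; [TestFixtureSetUp] is the NUnit 2.x name, NUnit 3 calls it [OneTimeSetUp]):

using NUnit.Framework;

// Hypothetical helper standing in for the project's real connection check.
public static class Database
{
    public static bool CanConnect() { return true; }
}

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // Verify the database is reachable before running any Get tests.
        if (!Database.CanConnect())
        {
            Assert.Ignore("Skipping GetTests: no database connection available.");
        }
    }

    [Test]
    public void GetTest()
    {
        // ... exercise Get against the database ...
    }
}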
Create a flag that the Get test checks: set it when the Add test fails, and return early from the Get test if it is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}
I was wondering whether the object under test should be a field and thus set up during a SetUp method (i.e. JUnit, NUnit, MS Test, …).
Consider the following examples (this is C# with MSTest, but the idea should be similar for any other language and testing framework):
public class SomeStuff
{
    public string Value { get; private set; }

    public SomeStuff(string value)
    {
        this.Value = value;
    }
}

[TestClass]
public class SomeStuffTestWithSetUp
{
    private string value;
    private SomeStuff someStuff;

    [TestInitialize]
    public void MyTestInitialize()
    {
        this.value = Guid.NewGuid().ToString();
        this.someStuff = new SomeStuff(this.value);
    }

    [TestCleanup]
    public void MyTestCleanup()
    {
        this.someStuff = null;
        this.value = string.Empty;
    }

    [TestMethod]
    public void TestGetValue()
    {
        Assert.AreEqual(this.value, this.someStuff.Value);
    }
}

[TestClass]
public class SomeStuffTestWithoutSetup
{
    [TestMethod]
    public void TestGetValue()
    {
        string value = Guid.NewGuid().ToString();
        SomeStuff someStuff = new SomeStuff(value);
        Assert.AreEqual(value, someStuff.Value);
    }
}
Of course, with just one test method, the first example is much too long, but with more test methods this could save quite a bit of redundant code.
What are the pros and cons of each approach? Are there any “Best Practices”?
It's a slippery slope once you start initializing fields and generally setting up the context of your test within the test method itself. This leads to large test methods and really unmanageable fixtures that don't explain themselves very well.
Instead, you should look at the BDD style of naming and test organization. Make one fixture per context, rather than one fixture per system under test. Then your [SetUp] truly does set up the context, and your tests can be simple one-liner asserts.
It's much easier to read when you see a test output that does this:
OrderFulfillmentServiceTests.cs
with_an_order_from_a_new_customer
it should check their credit from the credit service
it should give no discount
with valid credit check
it should decrement inventory
it should ship the goods
with a customer in texas or california
it should add appropriate sales tax
with an order from a gold customer
it should NOT check credit
it should get expedited shipping added for free
Our tests are now really good documentation for our system. Each "with_an..." is a test fixture, and the items below it are tests. Within those, you setup the context (the state of the world as the class name describes) and then the test does the simple assert that verifies what the method name says it does.
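A sketch of what one such context fixture could look like in NUnit (the domain types here are invented purely for illustration):

using NUnit.Framework;

// Minimal illustrative domain, just enough for the fixture below to compile.
public interface ICreditService { void Check(string customer); }

public class FakeCreditService : ICreditService
{
    public string CheckedCustomer;
    public void Check(string customer) { CheckedCustomer = customer; }
}

public class OrderFulfillmentService
{
    private readonly ICreditService credit;
    public decimal Discount { get; private set; }

    public OrderFulfillmentService(ICreditService credit) { this.credit = credit; }

    public void Fulfill(string customer, bool isNewCustomer)
    {
        if (isNewCustomer) credit.Check(customer);
        Discount = isNewCustomer ? 0m : 0.1m;
    }
}

// One fixture per context: the class name describes the state of the world,
// the [SetUp] creates it, and each test is a one-line assert.
[TestFixture]
public class with_an_order_from_a_new_customer
{
    private FakeCreditService creditService;
    private OrderFulfillmentService sut;

    [SetUp]
    public void EstablishContext()
    {
        creditService = new FakeCreditService();
        sut = new OrderFulfillmentService(creditService);
        sut.Fulfill("Alice", isNewCustomer: true);
    }

    [Test]
    public void it_should_check_their_credit_from_the_credit_service()
    {
        Assert.AreEqual("Alice", creditService.CheckedCustomer);
    }

    [Test]
    public void it_should_give_no_discount()
    {
        Assert.AreEqual(0m, sut.Discount);
    }
}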
The second approach is much more readable, and much easier to visually trace.
However, the first approach means less repetition.
What I've found is that I tend to use the SetUp to create objects (especially for things with a number of dependencies), and then set the values used in the test itself. From experience, this provides about the right amount of code-reuse versus readability/traceability.
From talking with Kent Beck about the design of jUnit I know that Test Classes were a way to share setup between Tests, so using the common initialization was the intent. However, along with that, that means splitting tests that require different setup into separate test classes that have revealing names.
Personally, I use Setup and Teardown methods for two distinct reasons, although I assume that others will have different reasons.
Use Setup and Teardown methods when there is common initialization logic that is used by all tests and a single instance of the object(s) created in the Setup is designed to be reused.
Use Setup and Teardown methods when the time it takes to create and destroy those object(s) is long enough to slow down the unit testing process when repeated in each TestMethod.
To give you an idea of how often I run across these scenarios: in a project that I am working on now, only two of my test classes (out of about eighty) have an explicit need for Setup and Teardown methods, and both times it was to satisfy my second reason, due to the 10-second max I have enabled for each test execution.
I also prefer the readability of having the object(s) created and destroyed within the TestMethod, although it is not a breaking or selling point for me.
The approach I take is somewhere in the middle: I use TearDown and SetUp to create a test "sandbox" directory (and delete it when done), as well as to initialize some test member variables with default values that will be used to test the classes. I then set up some "helper methods"; one is generally called InstantiateClass(), which I call with the default parameters (if any) and can override as necessary in each explicit test.
[Test]
public void TestSomething()
{
    _myVar = "value";
    InstantiateClass();
    RunTheClass();
    Assert.IsTrue(this, that);
}
In practice, I find setup methods make it hard to reason about a failing test: you have to scroll to somewhere near the top of the file (which can be very large) to figure out which collaborator has broken (not easy with mocking), and there is no clickable reference to navigate in your IDE. In short, you lose spatial locality.
Static helper methods reveal the collaborators more explicitly, and you avoid fields which unnecessarily widen the scope of variables.
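A small sketch of that style (names invented for illustration): the collaborators are visible at the call site instead of hidden in a distant SetUp.

using NUnit.Framework;

// Minimal illustrative class so the sketch compiles.
public class OrderService
{
    private readonly decimal taxRate;
    public OrderService(decimal taxRate) { this.taxRate = taxRate; }
    public decimal Total(decimal amount) { return amount * (1 + taxRate); }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Total_includes_tax()
    {
        // The collaborator values are created and visible right here.
        var sut = CreateSut(taxRate: 0.10m);

        var total = sut.Total(100m);

        Assert.AreEqual(110m, total);
    }

    // Static helper: narrow scope, explicit dependencies, easy to navigate to.
    private static OrderService CreateSut(decimal taxRate)
    {
        return new OrderService(taxRate);
    }
}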