When you have a simple method, like sum(int x, int y), it is easy to write unit tests. You can check that the method sums two sample integers correctly, for example that 2 + 3 returns 5, then check the same for some "extraordinary" inputs, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert.
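A minimal sketch of such tests (assuming a hypothetical Calculator class with a Sum method, and NUnit as used in the answers below):

[TestFixture]
public class CalculatorSumTest
{
    [Test]
    public void SumsTwoPositiveIntegers()
    {
        Assert.That(new Calculator().Sum(2, 3), Is.EqualTo(5));
    }

    [Test]
    public void SumsNegativeValues()
    {
        Assert.That(new Calculator().Sum(-2, -3), Is.EqualTo(-5));
    }

    [Test]
    public void SumWithZeroReturnsTheOtherOperand()
    {
        Assert.That(new Calculator().Sum(7, 0), Is.EqualTo(7));
    }
}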
What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a DOM object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed correctly, that a child node belongs to its parent, etc. For all of these I can write a simple input, for example
<root><child/></root>
that will be used to check parent-child relationships between nodes and so on for the rest of expectations.
Now, take a look at the following XML:
<root>
<child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
<child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
</root>
In order to check that the method worked correctly, I need to check many complex conditions, for example that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
All you need is to check one aspect of the SUT (System Under Test) per test.
[TestFixture]
public class XmlParserTest
{
    [Test, ExpectedException(typeof(XmlException))]
    public void FailIfXmlIsNotWellFormed()
    {
        Parse("<doc>");
    }

    [Test]
    public void ParseShortTag()
    {
        var doc = Parse("<doc/>");
        Assert.That(doc.DocumentElement.Name, Is.EqualTo("doc"));
    }

    [Test]
    public void ParseFullTag()
    {
        var doc = Parse("<doc></doc>");
        Assert.That(doc.DocumentElement.Name, Is.EqualTo("doc"));
    }

    [Test]
    public void ParseInnerText()
    {
        var doc = Parse("<doc>Text 1</doc>");
        Assert.That(doc.DocumentElement.InnerText, Is.EqualTo("Text 1"));
    }

    [Test]
    public void AttributesAreEmptyIfThereAreNoAttributes()
    {
        var doc = Parse("<doc></doc>");
        Assert.That(doc.DocumentElement.Attributes.Count, Is.EqualTo(0));
    }

    [Test]
    public void ParseAttribute()
    {
        var doc = Parse("<doc attribute11='attribute 11 value'></doc>");
        Assert.That(doc.DocumentElement.Attributes[0].Name, Is.EqualTo("attribute11"));
        Assert.That(doc.DocumentElement.Attributes[0].Value, Is.EqualTo("attribute 11 value"));
    }

    [Test]
    public void ChildNodesInnerTextAtFirstLevel()
    {
        var doc = Parse(@"<root>
                            <child1>Text 1</child1>
                            <child2>Text 2</child2>
                          </root>");
        Assert.That(doc.DocumentElement.ChildNodes.Count, Is.EqualTo(2));
        Assert.That(doc.DocumentElement.ChildNodes[0].InnerText, Is.EqualTo("Text 1"));
        Assert.That(doc.DocumentElement.ChildNodes[1].InnerText, Is.EqualTo("Text 2"));
    }

    // More tests ...

    private XmlDocument Parse(string xml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xml);
        return doc;
    }
}
Such an approach gives lots of advantages:
Easy defect location: if something is wrong with attribute parsing, then only the tests on attributes will fail.
Small tests are always easier to understand.
UPD: See what Gerard Meszaros (author of the xUnit Test Patterns book) says about this topic: xunitpatterns
One possibly contentious aspect of Verify One Condition per Test is what we mean by "one condition". Some test drivers insist on one assertion per test. This insistence may be based on using a Testcase Class per Fixture organization of the Test Methods and naming each test based on what the one assertion is verifying (e.g. AwaitingApprovalFlight.validApproverRequestShouldBeApproved). Having one assertion per test makes such naming very easy but it does lead to many more test methods if we have to assert on many output fields. Of course, we can often comply with this interpretation by extracting a Custom Assertion (page X) or Verification Method (see Custom Assertion) that allows us to reduce the multiple assertion method calls into one. Sometimes that makes the test more readable but when it doesn't, I wouldn't be too dogmatic about insisting on a single assertion.
Use multiple tests. The same restrictions apply: you should test some normal operational cases, some failing cases, and some edge cases.
In the same way that you can presume that if sum(x, y) works for some values of x it will work with other values, you can presume that if an XML parser can parse a sequence of 2 nodes, it can also parse a sequence of 100 nodes.
To elaborate a bit on Ian's terse answer: make that bit of XML the setup, and have separate individual tests, each with their own assertion. That way you're not duplicating the setup logic, but you still have fine-grained insight into what's wrong with your parser.
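A minimal sketch of that idea (NUnit assumed; the Parse helper is the one from the accepted answer):

[TestFixture]
public class XmlParserComplexDocumentTest
{
    private XmlDocument doc;

    [SetUp]
    public void SetUp()
    {
        // The complex XML from the question is arranged once, in one place.
        doc = Parse(@"<root>
                        <child1 attribute11='attribute 11 value'>Text 1</child1>
                        <child2 attribute21='attribute 21 value'>Text 2</child2>
                      </root>");
    }

    [Test]
    public void RootHasTwoChildren()
    {
        Assert.That(doc.DocumentElement.ChildNodes.Count, Is.EqualTo(2));
    }

    [Test]
    public void FirstChildCarriesItsText()
    {
        Assert.That(doc.DocumentElement.ChildNodes[0].InnerText, Is.EqualTo("Text 1"));
    }

    [Test]
    public void FirstChildCarriesItsAttribute()
    {
        Assert.That(doc.DocumentElement.ChildNodes[0].Attributes["attribute11"].Value,
                    Is.EqualTo("attribute 11 value"));
    }

    private XmlDocument Parse(string xml)
    {
        var doc = new XmlDocument();
        doc.LoadXml(xml);
        return doc;
    }
}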
Use NUnit's fluent syntax:
Assert.That(someString,
    Is.Not.Null
    .And.Not.Empty
    .And.EqualTo("foo")
    .And.Not.EqualTo("bar")
    .And.StartsWith("f"));
I had a similar kind of requirement, where I wanted to have one Assert for various input sets. Please check out the link below, which I blogged:
Writing better unit tests
This can be applied to your problem as well. Construct a factory class that contains the logic for constructing 'complex' input sets. The unit test case then has only one Assert.
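A sketch of that idea (the XmlInputFactory name and its cases are hypothetical; the Parse helper is the one from the accepted answer):

public static class XmlInputFactory
{
    // Centralizes the construction of 'complex' XML input sets.
    public static string DocumentWithTwoChildren()
    {
        return @"<root>
                   <child1 attribute11='attribute 11 value'>Text 1</child1>
                   <child2 attribute21='attribute 21 value'>Text 2</child2>
                 </root>";
    }
}

[Test]
public void ParsesDocumentWithTwoChildren()
{
    var doc = Parse(XmlInputFactory.DocumentWithTwoChildren());
    Assert.That(doc.DocumentElement.ChildNodes.Count, Is.EqualTo(2));
}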
Hope this helps.
Thanks ,
Vijay.
You might also want to write your own assertions (this example is taken from your own question):
attribute11 and attribute12 belong to element1
('attribute11 ', 'attribute12').belongsTo('element1');
or
('element1 attribute11').length
BTW, this selector style is similar to jQuery. You store this string in a complex graph repository. How would you unit test a very complex graph-connected database?
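One way to realize such a domain-specific assertion in C# is a small helper on top of NUnit (the AssertXml class and its method name are hypothetical):

public static class AssertXml
{
    // Fails unless the named element carries all of the given attributes.
    public static void HasAttributes(XmlDocument doc, string elementName, params string[] attributes)
    {
        var element = doc.GetElementsByTagName(elementName)[0];
        foreach (var attribute in attributes)
        {
            Assert.That(element.Attributes[attribute], Is.Not.Null,
                        string.Format("Element '{0}' is missing attribute '{1}'", elementName, attribute));
        }
    }
}

// Usage: one logical assertion per test.
AssertXml.HasAttributes(doc, "child1", "attribute11", "attribute12");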
Related
Recently I noticed that my team follows two approaches to writing tests in Reactor. The first one uses the .block() method and looks something like this:
@Test
void set_entity_version() {
    Entity entity = entityRepo.findById(ID)
        .block();
    assertNotNull(entity);
    assertFalse(entity.isV2());

    entityService.setV2(ID)
        .block();

    Entity updated = entityRepo.findById(ID)
        .block();
    assertNotNull(updated);
    assertTrue(updated.isV2());
}
The second one uses StepVerifier and looks something like this:
@Test
void set_entity_version() {
    StepVerifier.create(entityRepo.findById(ID))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertFalse(entity.isV2());
        })
        .verifyComplete();

    StepVerifier.create(entityService.setV2(ID)
            .then(entityRepo.findById(ID)))
        .assertNext(entity -> {
            assertNotNull(entity);
            assertTrue(entity.isV2());
        })
        .verifyComplete();
}
In my humble opinion, the second approach looks more reactive, I would say. Moreover, the official docs are very clear on that:
A StepVerifier provides a declarative way of creating a verifiable script for an async Publisher sequence, by expressing expectations about the events that will happen upon subscription.
Still, I'm really curious which way should be encouraged as the main road for testing in Reactor. Should the .block() method be abandoned completely, or could it be useful in some cases? If so, what are those cases?
Thanks!
You should use StepVerifier. It allows more options:
Verify that you expect n elements in a Flux
Verify that the Flux/Mono completes
Verify that an error is expected
Verify that a sequence of n elements followed by an error is expected (impossible to test with .block())
From the official doc:
public <T> Flux<T> appendBoomError(Flux<T> source) {
    return source.concatWith(Mono.error(new IllegalArgumentException("boom")));
}

@Test
public void testAppendBoomError() {
    Flux<String> source = Flux.just("thing1", "thing2");

    StepVerifier.create(appendBoomError(source))
        .expectNext("thing1")
        .expectNext("thing2")
        .expectErrorMessage("boom")
        .verify();
}
Create an initial context
Use virtual time to manipulate time, so when you have something like Mono.delay(Duration.ofDays(1)) you don't have to wait one day for your test to complete
Expect that no events are emitted for a given duration...
from https://medium.com/swlh/stepverifier-vs-block-in-reactor-ca754b12846b
There are pros and cons of both the block() and StepVerifier testing patterns. Hence, it is necessary to define a pattern or set of rules which can guide us on how to use StepVerifier and block().
In order to decide which pattern to use, we can try to answer the following questions, which will provide a clear expectation from the tests we are going to write:
Are we trying to test the reactive aspect of the code or just the output of the code?
In which of the patterns do we find clarity based on the 3 A's of testing, i.e. Arrange, Act, and Assert, in order to make the test understandable?
What are the limitations of the block() API over StepVerifier in testing reactive code? Which API is more fluent for writing tests in case of exceptions?
If you try answering all these questions, you will find the answers to "what" and "where". So, just give it a thought before reading the following answers:
block() tests the output of the code and not the reactive aspect. In a case where we are concerned with testing the output of the code, rather than the reactive aspect, we can use block() instead of StepVerifier, as it is easy to write and the tests are more readable.
The assertion library for a block() pattern is better organised in terms of the 3 A's pattern, i.e. Arrange, Act, and Assert, than StepVerifier. In StepVerifier, while testing a method call for a mock class, or even while testing a Mono output, one has to write expectations in the form of chained methods, unlike assert, which in my opinion decreases the readability of the tests. Also, if you forget to write the terminal step, i.e. verify(), in the case of StepVerifier, the code will not get executed and the test will go green. So the developer has to be very careful about calling verify at the end of the chain.
There are some aspects of reactive code that cannot be tested by using the block() API. In such cases one should use StepVerifier: when testing a Flux of data, or subscription delays, or subscriptions on different Schedulers, etc., the developer is bound to use StepVerifier.
To verify exceptions by using the block() API you need to use the assertThatThrownBy API in the assertions library, which catches the exception. With the use of an assertion API, the error message and the instance of the exception can be asserted. StepVerifier also provides assertions on exceptions via the expectError() API, and supports asserting the elements emitted before the error is thrown in a Flux, which cannot be achieved with block(). So, for the assertion of exceptions, StepVerifier is better than block(), as it can assert both Mono and Flux.
When people say "test only one thing", does that mean test one feature at a time or one scenario at a time?
method() {
    // set up data
    def data = new Data()

    // send external web service call
    def success = service.webserviceCall(data)

    // persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to do the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure if I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my wording on the by feature/scenario, I'm not sure how to word them)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just about how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
You might have heard the adage "one assert per test". This is good advice in general, because a test stops executing as soon as a single assert fails; all asserts further down are not executed. By splitting up the asserts into multiple tests you will receive more feedback when something goes wrong. When tests go red, you know exactly which asserts fail and don't have to go through the cycle of fixing one assertion failure, running the tests, fixing the next failure, and so on.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Side note: you construct your data object in the method itself and call the save method of that object. How do you verify that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with you where you quoted that with this approach it is "easier to isolate the errors". Roy Osherove really does know what he is talking about, especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (web service and data storage), so I would still class this as a unit test with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
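A sketch of those three tests in C# with NSubstitute (the IWebService, IDataStore, and Processor types are hypothetical stand-ins for the code in the question, and the save call is assumed to be routed through an injectable IDataStore seam so it can be observed, per the side note above):

public interface IWebService { bool Call(Data data); }
public interface IDataStore { void Save(Data data); }

[Test]
public void CallsTheWebServiceWithTheConstructedData()
{
    var service = Substitute.For<IWebService>();
    var store = Substitute.For<IDataStore>();

    new Processor(service, store).Method();

    service.Received(1).Call(Arg.Any<Data>());
}

[Test]
public void SavesTheDataWhenTheCallSucceeds()
{
    var service = Substitute.For<IWebService>();
    service.Call(Arg.Any<Data>()).Returns(true);
    var store = Substitute.For<IDataStore>();

    new Processor(service, store).Method();

    store.Received(1).Save(Arg.Any<Data>());
}

[Test]
public void DoesNotSaveTheDataWhenTheCallFails()
{
    var service = Substitute.For<IWebService>();
    service.Call(Arg.Any<Data>()).Returns(false);
    var store = Substitute.For<IDataStore>();

    new Processor(service, store).Method();

    store.DidNotReceive().Save(Arg.Any<Data>());
}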
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly then that should be no concern of my test. My test should only be concerned with the external dependencies and public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that is done without a call to a database or file system, and to that effect it does not call a web service either. The idea of a unit test is that if you did not have any internet connection, you should still be able to run it. So, having said that, if a method calls a web service or a database, then you basically are expected to mock the responses from the external system. You should be testing that unit of work only. As prgmtc mentioned above, one assert per test is the way to go.
Second, if you are calling a real web service or database, then consider calling those tests integration tests, depending upon what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally have only a single assert statement. In reality you will often find you end up with a number of assert statements that act as a single logical assert, as this often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccountTypeResult = "3";

    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccountTypeResult);
    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------
    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note I would have unit tested the ValidationResult class itself, but in the test above I feel it gives more clarity to include both Asserts.
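Such a test of ValidationResult on its own might look like this (a sketch, again with two asserts acting as one logical assert):

[Test]
public void Failure_GivenAMessage_ShouldReturnInvalidResultCarryingThatMessage()
{
    var result = ValidationResult.Failure("Invalid account type");

    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}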
Ideally, a test class is written for every class in the production code. In a test class, though, not all the test methods may require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?
I suggest creating separate methods wrapping the necessary precondition setup. Do not confuse this approach with the traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: the receipt exists/doesn't exist, the receipt has an invalid date, the receipt is not committed. Our happy path is the default setup (for example, done via the traditional test setup). The happy path test would then be as simple as (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
    receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special condition cases we simply write tiny, dedicated methods that arrange our test environment (e.g. set up dependencies) so that the conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
    ReceiptDoesNotExistInRepository("701");
    receiptProvider.GetReceipt("701").IsNull();
}
[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes/no setup:
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    ReceiptIsUncommitted("701");
    receiptProvider.GetReceipt("701").Throws<Exception>();
}
It's an option to group tests with the same preconditions in the same classes; this also helps avoid test classes of over a thousand lines. You can also group the creation of the preconditions in separate methods and let each test call the applicable method. Do this when most of the methods have different preconditions; otherwise you can just use a setup method that is called before each test.
I like to use a Setup method that gets called before each test runs. In this method I instantiate the class I want to test, giving it any dependencies it needs to be created. Then I set the specific details for the individual tests inside the test methods. This moves any common initialization of the class out to the setup method and allows each test to focus on what it needs to evaluate.
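A minimal sketch of that shape (the WidgetService and IWidgetRepository names are hypothetical; NSubstitute is used for the dependency, as elsewhere on this page):

[TestFixture]
public class WidgetServiceTest
{
    private IWidgetRepository repository;
    private WidgetService service;

    [SetUp]
    public void SetUp()
    {
        // Common initialization: build the SUT with its dependencies once per test.
        repository = Substitute.For<IWidgetRepository>();
        service = new WidgetService(repository);
    }

    [Test]
    public void ReturnsNullWhenTheWidgetIsUnknown()
    {
        // Test-specific detail goes here, not in SetUp.
        repository.FindById("42").Returns((Widget)null);

        Assert.IsNull(service.GetWidget("42"));
    }
}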
You may find this link valuable, it discusses an approach to Test Setups:
In Defense of Test Setup Methods, by Erik Dietrich
Should unit tests test all passing conditions as well as all failing conditions?
For example, imagine I have a test Widget_CannotActiveWidgetIfStateIsCancelled.
And let's say there are 100 possible states.
Can I get away with testing only that I cannot activate my widget when State == Cancelled, or do I have to also test that I CAN activate it in each of the other 99 states?
Is there some compromise that can let me avoid spending all my time writing tests? :)
It seems you are asking whether your tests should be exhaustive: whether you should test for all possible states. The answer is a resounding no, for the simple reason that even simple code can have far too many states. Even small programs can have more potential states than can be tested even if you used all the time there has been since the big bang.
You should instead use equivalence partitioning: identify groups of states, such that all the states in a group are likely to have similar behaviour, then have one test case per group.
If you do that, you might discover you need only two test cases.
This is a scenario where you want to use one parametrized test which gets all 99 values as input.
Using xUnit.net, this could look like this (untested, might contain small compilation errors):
[Fact]
public void Widget_CannotActiveWidgetIfStateIsCancelled()
{
    // Arrange ...
    sut.State = State.Cancelled;

    Assert.False(sut.CanActivate);
}

[Theory, ValidStatesData]
public void Widget_CanActivateWidgetIfStateIsNotCancelled(State state)
{
    // Arrange ...
    sut.State = state;

    Assert.True(sut.CanActivate);
}

private class ValidStatesDataAttribute : DataAttribute
{
    public override IEnumerable<object[]> GetData(
        MethodInfo methodUnderTest, Type[] parameterTypes)
    {
        return Enum.GetValues(typeof(State))
            .Cast<State>()
            .Except(new[] { State.Cancelled })
            .Select(x => new object[] { x });
    }
}
If you're using NUnit you can use parameterized test attributes, so you only have to code one test but can test all 100 values.
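A sketch with NUnit's [Values] attribute (the Widget and State types are the hypothetical ones from the question):

[Test]
public void CannotActivateWidgetWhenStateIsCancelled()
{
    var sut = new Widget { State = State.Cancelled };

    Assert.IsFalse(sut.CanActivate);
}

[Test]
public void CanActivateWidgetInAnyOtherState(
    [Values(State.New, State.Active, State.Completed)] State state)
{
    // NUnit runs this test once per listed value; [Range] or a custom
    // value source can cover all 99 non-cancelled states.
    var sut = new Widget { State = state };

    Assert.IsTrue(sut.CanActivate);
}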
I'm looking for tidy suggestions on how people organise their controller tests.
For example, take the "add" functionality of my "Address" controller,
[AcceptVerbs(HttpVerbs.Get)]
public ActionResult Add()
{
    var editAddress = new DTOEditAddress();
    editAddress.Address = new Address();
    editAddress.Countries = countryService.GetCountries();

    return View("Add", editAddress);
}

[RequireRole(Role = Role.Write)]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Add(FormCollection form)
{
    // save code here
}
I might have a fixture called "when_adding_an_address"; however, there are two actions I need to test under this title.
I don't want to call both actions in the Act() method of my fixture, so I divide the fixture in half, but then how do I name the halves? "When_adding_an_address_GET" and "When_adding_an_address_POST"?
Things just seem to be getting messy, quickly.
Also, how do you deal with stateless/setup-less assertions for controllers, and how do you arrange these with respect to the above? For example:
[Test]
public void the_requesting_user_must_have_write_permissions_to_POST()
{
    Assert.IsTrue(this.SubjectUnderTest.ActionIsProtectedByRole(c => c.Add(null), Role.Write));
}
This is custom code, I know, but you should get the idea: it simply checks that a filter attribute is present on the method. The point is it doesn't require any Arrange() or Act().
Any tips welcome!
Thanks
In my opinion you should forget about naming your tests after the methods you're testing. In fact, testing a single method is a strange concept; you should be testing a single thing a client will do with your code. So, for example, if you can hit Add with a POST and a GET, you should write two tests, like you suggested. If you want to see what happens in a certain exceptional case, you should write another test.
I usually pick names that tell a maintainer what they need to know. In Java:
@Test
public void shouldRedirectToGetWhenPostingToAdd() {
    // ...
}
You can do this in any language and pick any *DD naming convention if you like, but the point is that the test name should convey the expectations and the scenario. You will get very small tests this way, and I consider this a good thing.
Well, 13 months later and no answers. Awesome.
Here's what I do now:
/tests/controllers/address/add/get.cs
/tests/controllers/address/add/valid.cs
/tests/controllers/address/add/invalid.cs