I am in the process of learning to unit test. I have a 'domain object' that doesn't do much apart from hold state (e.g. an Employee class without any business logic). It has a method SetDefaults() which just fills its state with reasonable values. A simple method.
But when I go to unit test this method all I can think of is to run the method then check that every field is what it should be. Like (in C#):
[TestMethod()]
public void SetDefaultsTest()
{
    Employee employee = new Employee();
    employee.SetDefaults();
    Assert.AreEqual("New Employee", employee.Name);
    Assert.AreEqual(30, employee.Age);
    // etc.
}
It feels wrong to duplicate the entire functionality of SetDefaults() within my test. Should I just leave this method untested? The problem is that I'd like a test to ensure that when new properties are added to the class they are also added to the SetDefaults() method.
Trivial getters and setters sometimes don't have unit tests written for them. If that's all that SetDefaults() does, it probably won't hurt to skip it.
One thing you would want to consider testing, though, is that none of the set properties of the employee instance are null after calling SetDefaults():
var nonNullProperties = new object[] { employee.Name, employee.Age, ... };
foreach (var property in nonNullProperties)
    Assert.IsNotNull(property);
This makes sense, since you really just care that they are set to some default value, and not so much that they're a specific value.
It depends on what value you get out of that test. Don't test just for the sake of testing. However, if it is important that those defaults are correct and don't change, then go right ahead.
A lot of testing involves deciding "Hey, is this going to make my job easier in the long run?" Are these defaults changing all the time, or are they constant? Are the defaults very complicated, or are they a handful of strings and numbers? How important is it to the customer that these defaults are correct? How important is it to the other developers that the defaults are correct?
The test could do a million things, but if it doesn't add some value to someone who cares about it, then don't worry about it. If you've been asked to automate testing of 100% of all your code, perhaps reading up and discussing some of the ideas presented in the following blogs might benefit your team:
http://blog.jayfields.com/2009/02/thoughts-on-developer-testing.html
http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
Otherwise, if it doesn't add much value or constantly breaks, I'd say go ahead and leave it out.
That looks like a pretty reasonable unit test to me. You are providing a simple test that checks the results of calling that method. It doesn't help you with any new properties though, as the existing test would still pass. Perhaps you could use Reflection to iterate over all the properties of the object and check they are non-null?
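For illustration, a minimal sketch of that reflection idea might look like the following (assuming MSTest and public readable properties on Employee; note that value-typed properties such as Age box to non-null objects, so this only meaningfully guards reference-typed properties):

using System.Reflection;

[TestMethod]
public void SetDefaults_LeavesNoPropertyNull()
{
    var employee = new Employee();
    employee.SetDefaults();

    // Iterate every public property so newly added ones are covered
    // automatically, without editing the test.
    foreach (PropertyInfo property in typeof(Employee).GetProperties())
    {
        Assert.IsNotNull(property.GetValue(employee),
            "Property '" + property.Name + "' was not set by SetDefaults().");
    }
}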
Also SetDefaults() seems like a slightly odd method. Why not just initialise a new Employee to those values to start with? That way there is no risk that another coder will create an Employee and forget to call SetDefaults().
Let's say you have a class that has some arbitrary attributes:
class Data {
  String a = '';
  int b = 0;
  bool c = false;
  SomeObject d = SomeObject();
}
Let's also say that somewhere you have a function that resets most, but not all, of this Data object's attributes, to values that do not correlate with the constructor's defaults.
Data data = Data();
...
void resetData() {
  data = data
    ..a = 'reset'
    ..b = 42
    ..c = true;
  // We want to retain [d]'s state, for whatever reason.
}
How do you go about unit testing this behavior?
You could have a unit test that sets each attribute of Data to something entirely different from the reset's default values and then verifies that all of the relevant fields change, but that's 1) incredibly brittle and 2) defeats the purpose of unit testing. If you added another object e that's supposed to be reset as well, but you forgot to add it to the resetData function, you almost certainly forgot to add it to the unit test as well. The unit test would then provide no value, since the behavior would be broken but you would not be alerted to it.
Using reflection/introspection through dart:mirrors to test that each of the object's variables is indeed different (other than d) is an option, but dart:mirrors does not work with AngularDart, so users of that are left high and dry.
I also couldn't find any libraries that could "fuzz" objects by seeding them with garbage values, so I'm not entirely sure how to proceed (or whether I should even be wasting my time on this seemingly silly unit test).
Your question goes in a similar direction as the question of whether getters and setters should be unit-tested (see Should unit tests be written for getter and setters?). Your code example for method resetData is, in a sense, even more trivial than setters, because the attributes are assigned constants rather than parameter values.
And, the unit tests that test the correct setting of the respective attributes would just duplicate those values from the code. The likelihood of finding bugs with such tests is low. And, having a second developer look at the tests is not better than having the second developer review the code itself - a code review would be an even better use of development time. The tests might have a bit of value as regression tests, but again, in most cases where changes are made to the code, the tests have to be maintained just to follow the code changes.
Therefore, similar as for getters and setters and trivial constructors, I would recommend not to write specific tests for resetData, but instead try to make resetData part of some (slightly) larger test scenario by testing the impact of resetData on subsequent computations:
// Setup:
Data data = Data(); // Construction
... // some calculation on data (optional)
// Exercise:
resetData();
... // some calculation on data
// Verify:
...
After all, there should be a reason from a user's perspective why resetData assigns those attributes their specific values. These user-focused scenarios could help to make useful tests. (The name resetData, by the way, violates the principle of least surprise: people will assume it resets the values to their initial values.)
The other problem you describe (unit-tests don't tell you that you have not updated resetData if a new attribute was added) indicates you are expecting too much from unit-testing: This phenomenon is not limited to trivial functionality. If you add an attribute and forget to update some complex function to make use of it, plus you leave the tests as they are, then the tests will also continue to pass.
You could think of clever tricks, like, keeping and comparing the list of attributes that were known at the moment the method was written against the current list of attributes (obtained using introspection), but that seems like overkill to me - unless you are developing safety critical code, which I believe dart might not be designed for.
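For illustration, here is what that trick could look like in C# (since dart:mirrors is off the table for the asker); the Data fields mirror the example above, and all names are hypothetical:

using System.Linq;
using System.Reflection;

[Test]
public void Data_HasOnlyTheFieldsKnownToResetData()
{
    // The fields resetData handled when this test was written.
    var known = new[] { "a", "b", "c", "d" };

    var actual = typeof(Data)
        .GetFields(BindingFlags.Public | BindingFlags.Instance)
        .Select(f => f.Name)
        .ToList();

    // Fails as soon as a field is added or removed without revisiting
    // resetData, forcing a conscious decision.
    CollectionAssert.AreEquivalent(known, actual);
}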
Consider unit testing a dictionary object. The first unit tests you might write are a few that simply adds items to the dictionary and check exceptions. The next test may be something like testing that the count is accurate, or that the dictionary returns a correct list of keys or values.
However, each of these later cases requires that the dictionary can first reliably add items. If the tests which add items fail, we have no idea whether our later tests fail because of what they're testing is implemented incorrectly, or because the assumption that we can reliably add items is incorrect.
Can I declare a set of unit tests which cause a given unit test to be inconclusive if any of them fail? If not, how should I best work around this? Have I set up my unit tests wrong, that I'm running into this predicament?
It's not as hard as it might seem. Let's rephrase the question a bit:
If I test my piece of code which requires System.Collections.Generic.List<T>.Add to work, what should I do when one day Microsoft decides to break .Add on List<T>? Do I make my tests depending on this to work inconclusive?
The answer to the above is obvious: you don't. You let them fail for one simple reason - your assumptions have failed, and the test should fail. It's the same here. Once you get your add tests to work, from that point on you assume add works. You shouldn't treat your tested code any differently than 3rd-party tested code. Once it's proven to work, you assume it indeed does.
On a different note, you can utilize a concept called guard assertions. In your remove test, after the arrange phase, you introduce an additional assert phase which verifies your initial assumptions (in this case, that add is working).
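A sketch of what that could look like for the dictionary example from the question (NUnit syntax; MyDictionary is a stand-in name):

[Test]
public void Remove_DeletesPreviouslyAddedItem()
{
    // Arrange
    var dictionary = new MyDictionary();
    dictionary.Add("key", 42);

    // Guard assertion: verify the precondition this test relies on.
    // If Add is broken, we fail here, not in the assert phase below.
    Assert.That(dictionary.ContainsKey("key"), Is.True, "Precondition failed: Add is broken");

    // Act
    dictionary.Remove("key");

    // Assert
    Assert.That(dictionary.ContainsKey("key"), Is.False);
}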
To add an example, NUnit uses the concept above, disguised under the name Theory. This does exactly what you proposed (though it seems to be more related to data-driven testing than to general utility):
The theory itself is responsible for ensuring that all data supplied meets its assumptions. It does this by use of the Assume.That(...) construct, which works just like Assert.That(...) but does not cause a failure. If the assumption is not satisfied for a particular test case, that case returns an Inconclusive result, rather than a Success or Failure.
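The mechanics look roughly like the square-root example from the NUnit documentation, lightly adapted:

[Datapoint] public double Zero = 0.0;
[Datapoint] public double Positive = 4.0;
[Datapoint] public double Negative = -4.0;

[Theory]
public void SquareRootDefinition(double value)
{
    // Negative datapoints make the case Inconclusive rather than Failed.
    Assume.That(value >= 0.0);

    double sqrt = Math.Sqrt(value);
    Assert.That(sqrt * sqrt, Is.EqualTo(value).Within(0.000001));
}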
However, I think what Mark Seemann states in an answer to the question I linked makes the most sense:
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and one only) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
Nice question; I often ponder this, and I had this problem the other day. What I did was get the basics of our collection working using a dictionary behind the scenes. For example:
public class MyCollection
{
    private IDictionary<string, int> _backingStore;

    public MyCollection(IDictionary<string, int> backingStore)
    {
        _backingStore = backingStore;
    }
}
Then we test drove the addition implementation. As we had the dictionary by reference we could assert that after adding items our business logic was correct.
For example, the pseudo code for the addition was something like:
public void Add(Item item)
{
    // Check we have not added before
    // More business logic...
    // Add
}
Then the test could be written such as:
var subject = new MyCollection(backingStore);
var itemThatWeAdded = new Item();
subject.Add(itemThatWeAdded);
Assert.That(backingStore.Contains(itemThatWeAdded), Is.True);
We then went on to drive out the other methods such as retrieval, and deletion.
Your question is what you should do when the addition breaks, in turn breaking the retrieval. This is a catch-22 scenario. Personally, I'd rather ditch the backing store and treat it as an implementation detail. So this is what we did: we refactored the tests to use the system under test, rather than the backing store, for the asserts. The great thing about the backing store being public initially is that it allows you to test-drive small parts of the codebase, rather than having to implement both addition and retrieval in one go.
The test for addition then looked like the following after we refactored the collection to not expose the backing store.
var subject = new MyCollection();
var item = new Item();
subject.Add(item);
Assert.That(subject.Has(item), Is.True);
In this case I think this is fine. If you cannot add items successfully, then you sure as hell cannot retrieve anything, because you've never added them. As long as your tests are named well, a failing test such as "CanOnlyAddUniqueItemsToCollection" will point future developers in the right direction - in other words, it will tell them that the addition is broken. Just make sure your tests are named well and you will be giving as much help as possible.
I don't see this as too much of a problem. If your Dictionary class is not too big, and the unit test for that class is the only unit test testing that code, then when your add method is broken and multiple tests fail, you still know the problem is in the Dictionary class and can identify it, debug and fix it easily.
Where it becomes a problem is when you have other code smells or design problems such as:
unit tests are testing many application classes; using mocks instead can help here.
unit tests are actually system tests creating and testing many application classes at once.
the Dictionary class is too big and complex so when it breaks and tests fail it's difficult to figure out what part is broken.
This is very interesting. We use NUnit, and as best I can tell it runs test methods alphabetically. That might be an overly artificial way to order your tests, but if you built up your test classes such that alphabetically/numerically named prerequisite methods came first, you might accomplish what you want.
I find myself writing a test method, firing just it to watch it fail, and then writing the code to make it pass. When I'm all done I can run the whole class and everything passes - it doesn't matter what order the tests ran in, because everything 'works' thanks to the incremental development I did.
Now later on, if I break something in the thing I'm testing, who knows what all will fail in the harness. I guess it doesn't really matter to me - I've got a long list of failures and I can tease out what went wrong.
Suppose you have a method:
public void Save(Entity data)
{
    this.repositoryIocInstance.EntitySave(data);
}
Would you write a unit test at all?
public void TestSave()
{
    // arrange
    Mock<EntityRepository> repo = new Mock<EntityRepository>();
    repo.Setup(m => m.EntitySave(It.IsAny<Entity>()));

    // act
    MyClass c = new MyClass(repo.Object);
    c.Save(new Entity());

    // assert
    repo.Verify(m => m.EntitySave(It.IsAny<Entity>()), Times.Once());
}
Because later on if you do change method's implementation to do more "complex" stuff like:
public void Save(Entity data)
{
    if (this.repositoryIocInstance.Exists(data))
    {
        this.repositoryIocInstance.Update(data);
    }
    else
    {
        this.repositoryIocInstance.Create(data);
    }
}
...your unit test would fail but it probably wouldn't break your application...
Question
Should I even bother creating unit tests on methods that don't have any return types or don't change anything outside of an internal mock?
Don't forget that unit testing isn't just about testing code. It's about allowing you to determine when behaviour changes.
So you may have something that's trivial. However, its implementation may later change and introduce a side effect, and you want your regression test suite to tell you about it.
e.g. Often people say you shouldn't test setters/getters since they're trivial. I disagree - not because they're complicated methods, but because someone may inadvertently change them through ignorance, fat-fingering, etc.
Given all that I've just said, I would definitely implement tests for the above (via mocking, and/or perhaps it's worth designing your classes with testability in mind and having them report status etc.)
It's true that your test depends on your implementation, which is something you should avoid (though it is not really that simple sometimes...) and is not necessarily bad. But these kinds of tests are expected to break even when your change doesn't break the code.
You could have many approaches to this:
Create a test that really goes to the database and checks that the state was changed as expected (it won't be a unit test anymore)
Create a test object that fakes a database and does its operations in-memory (another implementation of your repositoryIocInstance), and verify that the state was changed as expected - see the sketch after this list. Changes to the repository interface would incur changes to this object as well, but your interfaces shouldn't be changing much, right?
See all of this as too expensive, and use your approach, which may unnecessarily break tests later (but since the chance of that is low, it is OK to take the risk)
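To make the second option concrete, the fake could be as small as this. The IEntityRepository interface and the Id property are assumptions made for illustration; the question only shows the Exists/Update/Create calls:

using System.Collections.Generic;

public interface IEntityRepository
{
    bool Exists(Entity data);
    void Create(Entity data);
    void Update(Entity data);
}

public class InMemoryEntityRepository : IEntityRepository
{
    private readonly Dictionary<int, Entity> store = new Dictionary<int, Entity>();

    public bool Exists(Entity data) { return store.ContainsKey(data.Id); }
    public void Create(Entity data) { store.Add(data.Id, data); }
    public void Update(Entity data) { store[data.Id] = data; }

    // Test helper: lets the assert phase inspect the resulting state.
    public Entity Find(int id)
    {
        Entity entity;
        return store.TryGetValue(id, out entity) ? entity : null;
    }
}

A test asserting that Find returns the entity after Save keeps passing whether Save makes a single EntitySave call or the Exists/Update/Create combination shown in the question.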
Ask yourself two questions. "What is the manual equivalent of this unit test?" and "is it worth automating?". In your case it would be something like:
What is manual equivalent?
- start debugger
- step into "Save" method
- step into the next call, making sure you end up inside the IRepository.EntitySave implementation
Is it worth automating? My answer is "no". It is 100% obvious from the code.
Out of hundreds of similar wasteful tests, I have not seen a single one that turned out to be useful.
The general rule of thumb is that you test everything that could plausibly break. If you are sure that the method is simple enough (and will stay simple enough) not to be a problem, then leave it untested.
The second thing is, you should test the contract of the method, not the implementation. If the test fails after a change but the application doesn't break, then your test is not testing the right thing. The test should cover cases that are important for your application. This should ensure that every change to the method that doesn't break the application also doesn't fail the test.
A method that does not return any result still changes the state of your application. Your unit test, in this case, should be testing whether the new state is as intended.
"your unit test would fail but it probably wouldn't break your application"
This is -- actually -- really important to know. It may seem annoying and trivial, but when someone else starts maintaining your code, they may make a really bad change to Save and (improbably) break the application.
The trick is to prioritize.
Test the important stuff first. When things are slow, add tests for trivial stuff.
When there isn't an assertion in a method, you are essentially asserting that exceptions aren't thrown.
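NUnit lets you state that implicit assertion explicitly; a minimal sketch reusing the Save example from the question:

[Test]
public void Save_DoesNotThrow()
{
    var repo = new Mock<EntityRepository>();
    var c = new MyClass(repo.Object);

    // The only expectation: no exception escapes Save.
    Assert.DoesNotThrow(() => c.Save(new Entity()));
}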
I'm also struggling with the question of how to test public void myMethod(). I guess if you do decide to add a return value for testability, the return value should represent all salient facts necessary to see what changed about the state of the application.
public void myMethod()
becomes
public ComplexObject myMethod() {
    DoLotsOfSideEffects();
    return new ComplexObject { /* rows changed, primary key, value of each column, etc. */ };
}
and not
public bool myMethod() {
    DoLotsOfSideEffects();
    return true;
}
The short answer to your question is: Yes, you should definitely test methods like that.
I assume that it is important that the Save method actually saves the data. If you don't write a unit test for this, then how do you know?
Someone else may come along and remove that line of code that invokes the EntitySave method, and none of the unit tests will fail. Later on, you are wondering why items are never persisted...
In this particular method, you could argue that anyone deleting that line would only do so with malign intent, but the thing is: simple things don't necessarily stay simple, and you'd better write the unit tests before things get complicated.
It is not an implementation detail that the Save method invokes EntitySave on the Repository - it is part of the expected behavior, and a pretty crucial part, if I may say so. You want to make sure that data is actually being saved.
Just because a method does not return a value doesn't mean that it isn't worth testing. In general, if you observe good Command/Query Separation (CQS), any void method should be expected to change the state of something.
Sometimes that something is the class itself, but other times, it may be the state of something else. In this case, it changes the state of the Repository, and that is what you should be testing.
This is called testing Indirect Outputs, instead of the more normal Direct Outputs (return values).
The trick is to write unit tests so that they don't break too often. When using Mocks, it is easy to accidentally write Overspecified Tests, which is why most Dynamic Mocks (like Moq) default to Stub mode, where it doesn't really matter how many times you invoke a given method.
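As a sketch of the difference, using the question's code: Moq's default (loose) behavior ignores calls you don't verify, and Verify without an explicit Times defaults to "at least once", which pins down less of the implementation than Times.Once():

[Test]
public void Save_PassesEntityToRepository()
{
    var repo = new Mock<EntityRepository>(); // loose mode by default
    var c = new MyClass(repo.Object);
    var entity = new Entity();

    c.Save(entity);

    // No Times argument: 'at least once'. The test cares that the entity
    // reached the repository, not how many calls the implementation made.
    repo.Verify(m => m.EntitySave(entity));
}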
All this, and much more, is explained in the excellent xUnit Test Patterns.
I'm trying to get my head around TDD methodology and have run into - what I think is - a chicken-and-egg problem: what to do if a bug fix involves the changing of a method's signature.
Consider the following method signature:
string RemoveTokenFromString (string delimited, string token)
As the name suggests, this method removes all instances of a token from delimited and returns the resultant string.
I find later that this method has a bug (e.g. the wrong bits are being removed from the string). So I write a test case describing the scenario where the bug occurs and make sure that the test fails.
When fixing the bug, I find that the method needs more information to be able to do its job properly - and this bit of information can only be sent in as a parameter (the method under test is part of a static class).
What do I do then? If I fix the bug, this compels me to change the unit test - would that be 'correct' TDD methodology?
You have fallen into the most dangerous trap in TDD: you think TDD is about testing, but it isn't. However, it is easy to fall into that trap, since all the terminology in TDD is about testing. This is why BDD was invented: it is essentially TDD, but without the confusing terminology.
In TDD, tests aren't really tests, they are examples. And assertions aren't really assertions, they are expectations. And you aren't dealing with units, you are dealing with behaviors. BDD just calls them that. (Note: BDD has evolved since it was first invented, and it now incorporates things that aren't part of TDD, but the original intention was just "many people do TDD wrong, so use different words to help them do it right".)
Anyway, if you think of a test not as a test, but a behavioral example of how the method should work, it should become obvious that as you develop a better understanding of the expected behavior, deleting or changing the test is not only allowed by TDD, it is the only correct choice! Always keep that in mind!
There is absolutely nothing wrong with bombing your tests, when you discover that the intended behaviour of the unit changes.
//Up front
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(expected, StaticClass.RemoveTokenFromString(text, "."));
}

//After finding that it doesn't do the right thing,
//delete the old test and *design* a new function that
//does what you want through a new test.
//Remember TDD is about design, not testing!
[Test]
public void should_remove_correct_token_from_string()
{
    var text = "do.it.correctly..";
    var expected = "doitcorrectly";
    Assert.AreEqual(
        expected,
        StaticClass.RemoveTokenFromString(
            text,
            ".",
            System.Text.Encoding.UTF8));
}

//This will force you to add a new parameter to your function.
//Obviously now, there are edge cases to deal with for your new parameter etc.,
//so more tests are required to further design your new function.
Keep it simple.
If your unit test is wrong, or obsolete, you have to rewrite it. If your specs change, or certain specs are no longer relevant, your unit tests have to reflect that.
Red, green, refactor also applies to your unit tests, not just the code you are testing.
There is a refactoring called Add Parameter that could help here.
If your language supports method overloading, you could create the new function with the new parameter first, copying the body of the existing function and fixing your problem.
Then when the problem is fixed, you can modify all the tests, one by one, to call the new method. Last you can delete the old method.
With a language that does not support method overloading, create a new function with a different name, copy the body of the existing function into the new function, and have the existing function call the new function, possibly with a dummy value for the new parameter. Then you could have all your tests passing. Make your old tests call the new function, one by one. When the old function is not used anymore, it can be deleted and the new function renamed.
This is a bit process-heavy, but I think this is the TDD way to follow red-green-refactor.
Default parameter values could also help, if they are available in your language.
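A sketch of that refactoring using the RemoveTokenFromString example from earlier (the Encoding parameter and the body are purely illustrative):

using System.Text;

public static class StaticClass
{
    // New overload carries the extra information and the fixed logic.
    public static string RemoveTokenFromString(string delimited, string token, Encoding encoding)
    {
        // ...fixed implementation that actually uses 'encoding'...
        return delimited.Replace(token, string.Empty);
    }

    // Old signature keeps compiling and delegates with a default value,
    // so existing callers and tests can migrate one by one.
    public static string RemoveTokenFromString(string delimited, string token)
    {
        return RemoveTokenFromString(delimited, token, Encoding.UTF8);
    }
}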
Red, Green, Refactor.
Whatever you do, you first want to get to a state where you have a compiling but failing test case that reproduces the bug. You then can proceed on adding just the parameter to the test and the implementation, but do nothing with it so you still have Red.
I'd say don't fret about the 'right'/'correct' way... whatever helps get you closer to the solution quicker.
If you find that you need to take in an extra parameter,
update the call in the test case
add the new parameter to the actual method
verify that your code builds and the test fails again.
proceed with making it green.
Only in cases where adding a new parameter would result in zillions of compile errors would I recommend taking it in baby steps... you don't want to update the whole source base before finding out you really didn't need the third param, or that you need a fourth one... time lost. So get the new version of the method 'working' before updating all references (as philippe says here):
write a new overload with the added parameter
Move the code of the old method into the new overload
Make the old overload relay or delegate to the new overload with some default value for the new param
Now you can get back to the task at hand and get the new test to go green.
If you don't need the old overload anymore, delete it and fix the resulting compile errors.
If a method is not doing its job correctly then it needs to be fixed, and if the fix requires a change in signature then there is nothing wrong with that. Per TDD, you write the test case first, which will certainly fail, and then you write the method to satisfy the test. Under this approach, if the method call in the test requires a parameter for the method to function, then you need to add it.
Given the following SUT, would you consider this unit test to be unnecessary?
edit: we cannot assume the names will match, so reflection wouldn't work.
edit 2: in actuality, this class would implement an IMapper interface, and there would be full-blown behavioral (mock) testing at the business logic layer of the application. This test just happens to be the lowest level of testing that must be state-based. I question whether this test is truly necessary, because the test code is almost identical to the source code itself, and based on actual experience I don't see how this unit test makes maintenance of the application any easier.
//SUT
public class Mapper
{
    public void Map(DataContract from, DataObject to)
    {
        to.Value1 = from.Value1;
        to.Value2 = from.Value2;
        // ...
        to.Value100 = from.Value100;
    }
}
//Unit Test
[TestMethod]
public void MapperTest()
{
    DataContract contract = new DataContract() { ... };
    DataObject dataObject = new DataObject() { ... };
    Mapper mapper = new Mapper();
    mapper.Map(contract, dataObject);
    Assert.AreEqual(contract.Value1, dataObject.Value1);
    // ...
    Assert.AreEqual(contract.Value100, dataObject.Value100);
}
I would question the construct itself, not the need to test it.
[Reflection would be far less code.]
I'd argue that it is necessary.
However, it would be better as 100 separate unit tests, each checking one value.
That way, when something goes wrong with Value65, you can run the tests and immediately find that Value65 and Value66 are being transposed.
Really, it's this kind of simple code where you switch your brain off and forget that errors happen. Having tests in place means you pick them up, and not your customers.
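If 100 hand-written test methods feel heavy, NUnit 3's Assert.Multiple gets close to the same diagnostic value in a single method, reporting every mismatched property at once instead of stopping at the first failure (a sketch, not from the original post):

[Test]
public void Map_CopiesEveryValue()
{
    var contract = new DataContract { /* populate with distinct values */ };
    var dataObject = new DataObject();

    new Mapper().Map(contract, dataObject);

    // Every failing assert inside the block is collected and reported,
    // so a transposed Value65/Value66 shows up by name.
    Assert.Multiple(() =>
    {
        Assert.AreEqual(contract.Value1, dataObject.Value1, "Value1");
        Assert.AreEqual(contract.Value2, dataObject.Value2, "Value2");
        // ...one labelled assert per mapped property...
        Assert.AreEqual(contract.Value100, dataObject.Value100, "Value100");
    });
}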
However, if you have a class with 100 properties all named ValueXXX, then you might be better using an Array or a List.
It is not excessive. I'm just not sure it fully focuses on what you want to test.
"Under the strict definition, for QA purposes, the failure of a UnitTest implicates only one unit. You know exactly where to search to find the bug."
The power of a unit test is in having a known correct resultant state, so the focus should be the values assigned to DataContract. Those are the bounds we want to push, to ensure that all possible values for DataContract can be successfully copied into DataObject. DataContract must be populated with edge-case values.
PS. David Kemp is right: 100 well-designed tests would be the most true to the concept of unit testing.
Note : For this test we must assume that DataContract populates perfectly when built (that requires separate tests).
It would be better if you could test at a higher level, i.e. the business logic that requires you to create the Mapper.Map() function.
Not if this was the only unit test of this kind in the entire app. However, the second another like it showed up, you'd see me scrunch my eyebrows and start thinking about reflection.
Not excessive.
I agree the code looks strange, but that said:
The beauty of a unit test is that once it is done, it is there forever; so if anyone, for any reason, decides to change that implementation for something more "clever", the test should still pass - not a big deal.
I personally would probably have a Perl script generate the code, as I would get bored replacing the numbers for each assert and would probably make some mistakes along the way; the Perl script (or whatever script) would be faster for me.
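The same generation idea works in C# rather than Perl - print the asserts once and paste them into the test (hypothetical names, matching the test above):

using System;

class GenerateAsserts
{
    static void Main()
    {
        // Emit one assert line per numbered property; copy the output
        // into the test body.
        for (int i = 1; i <= 100; i++)
            Console.WriteLine("Assert.AreEqual(contract.Value" + i + ", dataObject.Value" + i + ");");
    }
}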