Let's say you have a class that has some arbitrary attributes:
class Data {
  String a = '';
  int b = 0;
  bool c = false;
  SomeObject d = SomeObject();
}
Let's also say that somewhere you have a function that resets most, but not all, of this Data object's attributes, and to values that do not match the constructor's defaults.
Data data = Data();
...
void resetData() {
  data = data
    ..a = 'reset'
    ..b = 42
    ..c = true;
  // We want to retain [d]'s state, for whatever reason.
}
How do you go about unit testing this behavior?
You could have a unit test that sets each attribute of Data to something entirely different from the reset's default values and verifies that all of the relevant fields change, but that's 1) incredibly brittle and 2) defeats the purpose of unit testing. If you added another object e that's supposed to be reset as well, but you forgot to add it to the resetData function, you almost certainly forgot to add it to the unit test as well. The unit test would then be providing no value, since the behavior would be broken but you would not be alerted to it.
Using reflection/introspection through dart:mirrors to verify that each of the object's variables (other than d) has indeed changed is an option, but dart:mirrors does not work with AngularDart, so its users are left high and dry.
I also couldn't find any libraries that could "fuzz" objects by seeding them with garbage values, so I'm not entirely sure how to proceed (or whether I should even be wasting my time on this seemingly silly unit test).
Your question goes in a similar direction as the question of whether getters and setters should be unit-tested (see Should unit tests be written for getter and setters?). Your code example, method resetData, is in a sense even more trivial than setters, because the attributes are assigned constants rather than parameter values.
And the unit tests that check the correct setting of each attribute would just duplicate those values from the code. The likelihood of finding bugs with such tests is low. Having a second developer look at the tests is no better than having that developer review the code itself - a code review would be a better use of development time. The tests might have a bit of value as regression tests, but again, in most cases where changes are made to the code, the tests have to be maintained just to follow the code changes.
Therefore, just as for getters, setters, and trivial constructors, I would recommend not writing specific tests for resetData, but instead trying to make resetData part of some (slightly) larger test scenario by testing the impact of resetData on subsequent computations:
// Setup:
Data data = Data(); // Construction
... // some calculation on data (optional)
// Exercise:
resetData();
... // some calculation on data
// Verify:
...
After all, there should be a reason from a user's perspective why resetData assigns those attributes their specific values. These user-focused scenarios could help to make useful tests. (The name resetData, by the way, violates the principle of least surprise, because people will assume that it resets the values to their initial values.)
The other problem you describe (unit-tests don't tell you that you have not updated resetData if a new attribute was added) indicates you are expecting too much from unit-testing: This phenomenon is not limited to trivial functionality. If you add an attribute and forget to update some complex function to make use of it, plus you leave the tests as they are, then the tests will also continue to pass.
You could think of clever tricks, like keeping the list of attributes that were known at the moment the method was written and comparing it against the current list of attributes (obtained using introspection), but that seems like overkill to me - unless you are developing safety-critical code, which I believe Dart might not be designed for.
Related
In my unit tests, should I test for attributes on the models?
If I have a model called person, should I write a test to make sure that person.name exists and is required?
Since you did not specify a language I will have to answer generally.
If you have a dynamic language then it is pretty important to ensure that your dynamically generated objects contain all the required fields (and that those fields are populated appropriately).
As a general rule when writing unit tests, just write the test. What damage will it do? Find a bug? Let someone know that they have broken something which might cause a bug?
It is always difficult to know when testing goes too far - i.e. if you have a method/property on an object that just returns the value found in an attribute, should we bother with tests that are so simple? Our striving for perfection tells us yes, but pragmatically it's not so simple, as you can end up with lots of extra tests that don't add a lot of value.
Constructors have always had a similar problem - if a constructor just takes parameters and saves them as attributes in the object, should we test this?
The approach I take here is for each class to add a test called test_construction to the unit tests for that class. This will construct an object, and then check the values for all of the methods/properties that look up these values. This gives test coverage for both the constructor and the attribute lookups, with minimal overhead of a single test.
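In C#, for instance, such a test might look like the following (a minimal sketch; the Person class, its constructor, and the use of NUnit are all assumptions on my part):

[Test]
public void test_construction()
{
    // Construct with known values...
    var person = new Person("Alice", 30);

    // ...then check every property that looks those values up.
    Assert.AreEqual("Alice", person.Name);
    Assert.AreEqual(30, person.Age);
}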
However - if you or your team decides not to test these functions I would not worry too much either - any issues are likely to get picked up by other tests, and there are bound to be more important tests for you to write than these.
Consider unit testing a dictionary object. The first unit tests you might write are a few that simply add items to the dictionary and check for exceptions. The next tests may be something like testing that the count is accurate, or that the dictionary returns a correct list of keys or values.
However, each of these later cases requires that the dictionary can first reliably add items. If the tests which add items fail, we have no idea whether our later tests fail because what they're testing is implemented incorrectly, or because the assumption that we can reliably add items is incorrect.
Can I declare a set of unit tests which cause a given unit test to be inconclusive if any of them fail? If not, how should I best work around this? Have I set up my unit tests wrong, that I'm running into this predicament?
It's not as hard as it might seem. Let's rephrase the question a bit:
If I test my piece of code which requires System.Collections.Generic.List<T>.Add to work, what should I do when one day Microsoft decides to break .Add on List<T>? Do I make my tests depending on this to work inconclusive?
The answer to the above is obvious: you don't. You let them fail, for one simple reason - your assumptions have failed, and the tests should fail. It's the same here. Once you get your add tests to work, from that point on you assume add works. You shouldn't treat your own tested code any differently than 3rd-party tested code. Once it's proven to work, you assume it indeed does.
On a different note, you can utilize concept called guard assertions. In your remove test, after the arrange phase you introduce additional assert phase, which verifies your initial assumptions (in this case - that the add is working). More information about this technique can be found here.
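A guard assertion might look something like this (a sketch in NUnit-style C#; the collection and its Add, Remove, and Has members are hypothetical):

[Test]
public void Remove_DeletesPreviouslyAddedItem()
{
    // Arrange
    var subject = new MyCollection();
    var item = new Item();
    subject.Add(item);

    // Guard assertion: verify the arrange-phase assumption
    // (that Add worked) before exercising Remove at all.
    Assert.That(subject.Has(item), Is.True, "guard: Add did not work");

    // Act
    subject.Remove(item);

    // Assert
    Assert.That(subject.Has(item), Is.False);
}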
To add an example, NUnit uses the concept above, disguised under the name Theory. It does exactly what you proposed (though it seems more related to data-driven testing than to general utility):
The theory itself is responsible for ensuring that all data supplied meets its assumptions. It does this by use of the Assume.That(...) construct, which works just like Assert.That(...) but does not cause a failure. If the assumption is not satisfied for a particular test case, that case returns an Inconclusive result, rather than a Success or Failure.
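In code, that looks roughly like this (a sketch based on the classic square-root example from the NUnit documentation):

[Datapoints]
public double[] values = new double[] { -4.0, 0.0, 9.0 };

[Theory]
public void SquareRootDefinition(double value)
{
    // Cases that fail this assumption come back Inconclusive, not Failed.
    Assume.That(value >= 0.0);

    double sqrt = Math.Sqrt(value);
    Assert.That(sqrt * sqrt, Is.EqualTo(value).Within(0.000001));
}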
However, I think what Mark Seemann states in an answer to the question I linked makes the most sense:
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and one only) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
Nice question, I often ponder this and had this problem the other day. What I did was get the basics of our collection working using a dictionary behind the scenes. For example:
public class MyCollection
{
    // Field renamed to _backingStore so the constructor assignment
    // actually refers to it (the original declared backingStore but
    // assigned _backingStore).
    private readonly IDictionary<string, int> _backingStore;

    public MyCollection(IDictionary<string, int> backingStore)
    {
        _backingStore = backingStore;
    }
}
Then we test-drove the addition implementation. As we had the dictionary by reference, we could assert that, after adding items, our business logic was correct.
For example, the pseudo code for the addition was something like:
public void Add(Item item)
{
    // Check we have not added before
    // More business logic...
    // Add
}
Then the test could be written such as:
var subject = new MyCollection(backingStore);
subject.Add(new Item());
Assert.That(backingStore.Contains(itemThatWeAdded));
We then went on to drive out the other methods, such as retrieval and deletion.
Your question is what you should do when the addition breaks, in turn breaking the retrieval. This is a catch-22 scenario. Personally I'd rather ditch the backing store and treat it as an implementation detail. So this is what we did. We refactored the tests to use the system under test, rather than the backing store, for the asserts. The great thing about the backing store being public initially is that it allows you to test-drive small parts of the codebase, rather than having to implement both addition and retrieval in one go.
The test for addition then looked like the following after we refactored the collection to not expose the backing store.
var subject = new MyCollection();
var item = new Item();
subject.Add(item);
Assert.That(subject.Has(item), Is.True);
In this case I think this is fine. If you cannot add items successfully, then you sure as hell cannot retrieve anything, because you've never added them. As long as your tests are named well, a failing test such as "CanOnlyAddUniqueItemsToCollection" will point future developers in the right direction - in other words, to the fact that the addition is broken. Just make sure your tests are named well and you will be giving as much help as possible.
I don't see this as too much of a problem. If your Dictionary class is not too big, and the unit test for that class is the only unit test testing that code, then when your add method is broken and multiple tests fail, you still know the problem is in the Dictionary class and can identify it, debug and fix it easily.
Where it becomes a problem is when you have other code smells or design problems such as:
unit tests are testing many application classes; using mocks instead can help here.
unit tests are actually system tests creating and testing many application classes at once.
the Dictionary class is too big and complex so when it breaks and tests fail it's difficult to figure out what part is broken.
This is very interesting. We use NUnit and, as best I can tell, it runs test methods alphabetically. That might be an overly artificial way to order your tests, but if you built up your test classes such that alphabetically/numerically-named pre-req methods came first, you might accomplish what you want.
I find myself writing a test method, firing just it to watch it fail, and then writing the code to make it pass. When I'm all done I can run the whole class and everything passes - it doesn't matter what order the tests ran in, because everything 'works' because of the incremental dev I did.
Now later on, if I break something in the thing I'm testing, who knows what will fail in the harness. I guess it doesn't really matter to me - I've got a long list of failures and I can tease out what went wrong.
For example, this article introduces Code Contracts.
What is the benefit?
Static analysis seems cool, but at the same time it would prevent you from passing null as a parameter in a unit test (if you followed the example in the article, that is).
While on the topic of unit testing: given how things are now, is there any point to code contracts if you already practice automated testing?
Update
Having played with Code Contracts, I'm a little disappointed. For example, based on the code in the accepted answer:
public double CalculateTotal(Order order)
{
    Contract.Requires(order != null);
    Contract.Ensures(Contract.Result<double>() >= 0);
    return 2.0;
}
For unit testing, you still have to write tests to ensure that null cannot be passed and that the result is greater than or equal to zero, if the contracts are business logic. In other words, if I were to remove the first contract, no tests would break unless I specifically had a test for this feature. This is based on not using the static analysis built into the better (Ultimate, etc.) editions of Visual Studio, however.
Essentially they all boil down to an alternate way of writing traditional if statements. My experience actually using TDD with Code Contracts shows why, and how I went about it.
I don't think unit testing and contracts interfere with each other that much, and if anything contracts should help unit testing since it removes the need to add tedious repetitive tests for invalid arguments. Contracts specify the minimum you can expect from the function, whereas unit tests attempt to validate the actual behaviour for a particular set of inputs. Consider this contrived example:
public class Order
{
    public IEnumerable Items { get; }
}

public class OrderCalculator
{
    public double CalculateTotal(Order order)
    {
        Contract.Requires(order != null);
        Contract.Ensures(Contract.Result<double>() >= 0);
        return 2.0;
    }
}
Clearly the code satisfies the contract, but you'd still need unit testing to validate it actually behaves as you'd expect.
What is the benefit?
Let's say that you want to make sure that a method never returns null. Now with unit tests, you have to write a bunch of test cases where you call the method with varying inputs and verify that the output is not null. Trouble is, you can't test all possible inputs.
With code contracts, you just declare that the method never returns null. The static analyzer will then complain if it is not possible to prove that. If it doesn't complain, you know that your assertion is correct for all possible inputs.
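For example, the declaration for such a method is a one-line postcondition (a sketch; GetName and the _name field are hypothetical, and you need using System.Diagnostics.Contracts):

public string GetName()
{
    // The static analyzer must prove this holds on every return path.
    Contract.Ensures(Contract.Result<string>() != null);

    return _name ?? string.Empty; // never null, so the checker is satisfied
}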
Less work, perfect correctness guarantees. What's not to like?
Contracts allow you to say what the actual purpose of the code is, as opposed to letting whatever the code happens to do with whatever random arguments it is handed stand as the definition, from the point of view of the compiler or the next reader of the code. This allows significantly better static analysis and code optimization.
For instance, if I declare an integer parameter (using the contract notation) to be in the range of 1 to 10, and I have a local array in my function, declared with that same size, that is indexed by the parameter, the compiler can tell that there is no possibility of a subscript error, thus producing better code.
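A sketch of that example (the method is hypothetical; again assumes using System.Diagnostics.Contracts):

public int Lookup(int i)
{
    // Declared range: 1 to 10 inclusive.
    Contract.Requires(i >= 1 && i <= 10);

    int[] table = new int[10];  // sized to match the declared range
    return table[i - 1];        // provably in bounds - no subscript error possible
}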
You can state that null is a valid value in a contract.
The purpose of unit testing is to verify dynamically that the code achieves whatever stated purpose it has. Just because you've written a contract for a function, doesn't mean the code does that, or that static analysis can verify the code does that. Unit testing won't go away.
Well, it will not interfere with unit testing in general. But I saw you mentioned something about TDD.
If I think about it from that perspective, I guess it could change the standard procedure:
create method (just signature)
create Unit test -> implement the test
run the test: let it fail
implement the method; hack it to the end just to make it work
run the test: see it pass
refactor your (possibly messy) method body
(re-run the test just to see you've not broken anything)
This would be the full-featured, hard-line unit-testing procedure. In such a context, I guess you could insert code contracts between the 1st and 2nd steps, like:
create method (just signature)
insert code contracts for the methods input parameters
create Unit test -> implement the test
...
The advantage I see at the moment is that you can write easier unit tests, in the sense that you wouldn't have to check every possible path, since some are already taken into account by your defined contracts. It just gives you additional checking, but it wouldn't replace unit testing, since there will always be more logic within the code - more paths that have to be tested with unit tests as usual.
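As a rough sketch of what that buys you, reusing the OrderCalculator example from the earlier answer: the first red-green cycle can start at the interesting behaviour, because the null-argument path is already pinned down by Contract.Requires(order != null):

[Test]
public void CalculateTotal_ReturnsNonNegativeTotal()
{
    var calculator = new OrderCalculator();

    // No separate null-argument test needed; the contract covers that path.
    double total = calculator.CalculateTotal(new Order());

    Assert.That(total, Is.GreaterThanOrEqualTo(0.0));
}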
Edit
Another possibility I didn't consider before would be to add the code contracts in the refactoring part, basically as an additional way of assuring things. But that would somehow be redundant, and since people don't like to do redundant stuff...
I am in the process of learning to unit test. I have a 'domain object' that doesn't do much apart from hold state (e.g. an Employee without any business logic). It has a method SetDefaults() which just fills its state with reasonable values. A simple method.
But when I go to unit test this method, all I can think of is to run the method and then check that every field is what it should be, like this (in C#):
[TestMethod()]
public void SetDefaultsTest()
{
    // Variable renamed from 'target' so the calls below compile;
    // Assert.AreEqual takes the expected value first.
    Employee employee = new Employee();
    employee.SetDefaults();

    Assert.AreEqual("New Employee", employee.Name);
    Assert.AreEqual(30, employee.Age);
    // etc.
}
It feels wrong to duplicate the entire functionality of SetDefaults() within my test. Should I just leave this method untested? The problem is that I'd like a test to ensure that when new properties are added to the class they are also added to the SetDefaults() method.
Trivial getters and setters sometimes don't have unit tests written for them. If that's all that SetDefaults() does, it probably won't hurt to skip it.
One thing you would want to consider testing, though, is that none of the set properties of the employee instance are null after calling SetDefaults():
var nonNullProperties = new object[] { employee.Name, employee.Age, ... };
foreach (var property in nonNullProperties)
    Assert.IsNotNull(property);
This makes sense, since you really just care that they are set to some default value, and not so much that they're a specific value.
It depends on what value you get out of that test. Don't test just for the sake of testing. However, if those defaults are important to be correct, and not change, then go right ahead.
A lot of testing involves deciding "Hey, is this going to make my job easier in the long run?" Are these defaults changing all the time, or are they constant? Are the defaults very complicated, or are they a handful of strings and numbers? How important is it to the customer that these defaults are correct? How important is it to the other developers that the defaults are correct?
The test could do a million things, but if it doesn't add some value to someone who cares about it, then don't worry about it. If you've been asked to automate testing of 100% of all your code, perhaps reading up and discussing some of the ideas presented in the following blogs might benefit your team:
http://blog.jayfields.com/2009/02/thoughts-on-developer-testing.html
http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
Otherwise, if it doesn't add much value or constantly breaks, I'd say go ahead and leave it out.
That looks like a pretty reasonable unit test to me. You are providing a simple test that checks the results of calling that method. It doesn't help you with any new properties, though, as the existing test would still pass. Perhaps you could use reflection to iterate over all the properties of the object and check that they are non-null?
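That reflection check might be sketched like this (assuming Employee exposes its state as public properties; note that value-typed properties such as an int Age box to a non-null object, so the check mainly exercises reference types):

[TestMethod()]
public void SetDefaults_AssignsEveryProperty()
{
    Employee employee = new Employee();
    employee.SetDefaults();

    // Enumerate every public property, so a newly added
    // property is picked up without editing the test.
    foreach (var property in typeof(Employee).GetProperties())
    {
        Assert.IsNotNull(property.GetValue(employee, null),
            "No default assigned to " + property.Name);
    }
}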
Also SetDefaults() seems like a slightly odd method. Why not just initialise a new Employee to those values to start with? That way there is no risk that another coder will create an Employee and forget to call SetDefaults().
Given the following SUT, would you consider this unit test to be unnecessary?
Edit: we cannot assume the names will match, so reflection wouldn't work.
Edit 2: in actuality, this class would implement an IMapper interface, and there would be full-blown behavioral (mock) testing at the business-logic layer of the application. This test just happens to be the lowest level of testing that must be state-based. I question whether this test is truly necessary, because the test code is almost identical to the source code itself, and based on actual experience I don't see how this unit test makes maintenance of the application any easier.
//SUT
public class Mapper
{
    public void Map(DataContract from, DataObject to)
    {
        to.Value1 = from.Value1;
        to.Value2 = from.Value2;
        ....
        to.Value100 = from.Value100;
    }
}
//Unit Test
public class MapperTest
{
    [Test] // test-framework attribute assumed; the original omitted it
    public void Map_CopiesAllValues()
    {
        DataContract contract = new DataContract() { ... };
        DataObject dataObject = new DataObject() { ... }; // 'do' is a C# keyword, so renamed
        Mapper mapper = new Mapper();

        mapper.Map(contract, dataObject);

        Assert.AreEqual(contract.Value1, dataObject.Value1);
        ...
        Assert.AreEqual(contract.Value100, dataObject.Value100);
    }
}
I would question the construct itself, not the need to test it.
[reflection would be far less code]
I'd argue that it is necessary.
However, it would be better as 100 separate unit tests, each checking one value.
That way, when something goes wrong with value65, you can run the tests and immediately find that value65 and value66 are being transposed.
Really, it's exactly this kind of simple code where you switch your brain off and forget that errors happen. Having tests in place means you pick the errors up, and not your customers.
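One of those hundred tests might look like this (a sketch; I'm assuming string-typed values, and showing only the value65 case mentioned above):

[Test]
public void Map_CopiesValue65()
{
    var contract = new DataContract { Value65 = "sentinel" };
    var to = new DataObject();

    new Mapper().Map(contract, to);

    // If Value65 and Value66 are transposed, exactly this test fails.
    Assert.AreEqual(contract.Value65, to.Value65);
}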
However, if you have a class with 100 properties all named ValueXXX, then you might be better using an Array or a List.
It is not excessive. I'm just not sure it fully focuses on what you want to test.
"Under the strict definition, for QA purposes, the failure of a UnitTest implicates only one unit. You know exactly where to search to find the bug."
The power of a unit test is in having a known correct resultant state; the focus should be the values assigned to DataContract. Those are the bounds we want to push, to ensure that all possible values of DataContract can be successfully copied into DataObject. DataContract must be populated with edge-case values.
P.S. David Kemp is right: 100 well-designed tests would be the most true to the concept of unit testing.
Note: For this test we must assume that DataContract populates perfectly when built (that requires separate tests).
It would be better if you could test at a higher level, i.e. the business logic that requires you to create the Mapper.Map() function.
Not if this was the only unit test of this kind in the entire app. However, the second another like it showed up, you'd see me scrunch my eyebrows and start thinking about reflection.
Not excessive.
I agree the code looks strange, but that said:
The beauty of a unit test is that once it is done, it is there forever; so if anyone, for any reason, decides to change that implementation for something more "clever", the test should still pass. Not a big deal.
I personally would probably have a Perl script generate the code, as I would get bored of replacing the numbers for each assert and would probably make some mistakes along the way; the Perl script (or whatever script) would be faster for me.