Should I test for attributes on Models? - unit-testing

In my unit tests, should I test for attributes on the models?
If I have a model called person, should I write a test to make sure that person.name exists and is required?

Since you did not specify a language I will have to answer generally.
If you are using a dynamic language then it is pretty important to ensure that your dynamically generated objects contain all the required fields (and that those fields are populated appropriately).
As a general rule when writing unit tests, just write the test. What damage can it do? Find a bug? Let someone know that they have broken something which might cause a bug?

It is always difficult to know when testing goes too far - i.e. if you have a method/property on an object that just returns the value found in an attribute, should we bother with tests that are so simple? Our drive for perfection tells us yes, but pragmatically it's not so simple: you can end up with lots of extra tests that don't add much value.
Constructors have always had a similar problem - if a constructor just takes parameters and saves them as attributes in the object, should we test this?
The approach I take here is for each class to add a test called test_construction to the unit tests for that class. This will construct an object, and then check the values for all of the methods/properties that look up these values. This gives test coverage for both the constructor and the attribute lookups, with minimal overhead of a single test.
However - if you or your team decides not to test these functions I would not worry too much either - any issues are likely to get picked up by other tests, and there are bound to be more important tests for you to write than these.
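For illustration, the test_construction idea above might look like this in Python (the Person class and its attributes are hypothetical stand-ins for your model):

```python
import unittest

class Person:
    """A hypothetical model whose properties just look up stored attributes."""
    def __init__(self, name, age):
        self._name = name
        self._age = age

    @property
    def name(self):
        return self._name

    @property
    def age(self):
        return self._age

class TestPerson(unittest.TestCase):
    def test_construction(self):
        # A single test covers both the constructor and the trivial
        # attribute lookups, with minimal overhead.
        person = Person(name="Alice", age=30)
        self.assertEqual(person.name, "Alice")
        self.assertEqual(person.age, 30)
```

One test per class in this style keeps the coverage without flooding the suite with one test per getter.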

Related

How do you unit test a function that "resets" an object?

Let's say you have a class that has some arbitrary attributes:
class Data {
  String a = '';
  int b = 0;
  bool c = false;
  SomeObject d = SomeObject();
}
Let's also say that somewhere you have a function that resets most, but not all, of this Data object's attributes - and to values that differ from the constructor's defaults.
Data data = Data();
...
void resetData() {
  data = data
    ..a = 'reset'
    ..b = 42
    ..c = true;
  // We want to retain [d]'s state, for whatever reason.
}
How do you go about unit testing this behavior?
You could have a unit test that sets each attribute of Data to something entirely different from the reset's default values and verifies that all of the relevant fields change, but that's 1) incredibly brittle and 2) defeats the purpose of unit testing. If you added another attribute e that's supposed to be reset as well, but forgot to add it to the resetData function, you almost certainly forgot to add it to the unit test as well. The unit test would then provide no value, since the behavior would be broken but you would not be alerted to it.
Using reflection/introspection through dart:mirrors is an option - testing that each of the object's variables is indeed different (other than d) - but dart:mirrors does not work with AngularDart, so users of that are left high and dry.
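In a language with runtime introspection, the "check every attribute changed except d" idea could be sketched like this (a Python stand-in for the dart:mirrors approach; the Data class mirrors the example above):

```python
class SomeObject:
    pass

class Data:
    def __init__(self):
        self.a = ''
        self.b = 0
        self.c = False
        self.d = SomeObject()

def reset_data(data):
    data.a = 'reset'
    data.b = 42
    data.c = True  # d is deliberately left untouched

def test_reset_changes_everything_except_d():
    data = Data()
    before = dict(vars(data))  # introspect and snapshot every attribute
    reset_data(data)
    after = vars(data)
    for name in before:
        if name == 'd':
            assert after[name] is before[name], "d must be preserved"
        else:
            assert after[name] != before[name], f"{name} should have been reset"
```

A newly added attribute shows up in the introspected snapshot automatically, which is exactly what the hand-written attribute list cannot do.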
I also couldn't find any libraries that could "fuzz" objects by seeding the object with garbage values, so not entirely sure how to proceed (or if I should even be wasting my time with this seemingly silly unit test).
Your question goes in a similar direction as the question, whether getters and setters should be unit-tested (see Should unit tests be written for getter and setters?). Your code example for method resetData is, in a sense, even more trivial than setters, because the attributes are assigned constants rather than parameter values.
Moreover, unit tests that check the correct setting of each attribute would just duplicate that value from the code. The likelihood of finding bugs with such tests is low. And having a second developer look at the tests is no better than having that developer review the code itself - a code review would be a better use of development time. The tests might have a bit of value as regression tests, but again, in most cases where changes are made to the code, the tests have to be maintained just to follow the code changes.
Therefore, as with getters, setters, and trivial constructors, I would recommend not writing specific tests for resetData, but instead trying to make resetData part of some (slightly) larger test scenario by testing the impact of resetData on subsequent computations:
// Setup:
Data data = Data(); // Construction
... // some calculation on data (optional)
// Exercise:
resetData();
... // some calculation on data
// Verify:
...
After all, there should be a reason from a user's perspective why resetData assigns those attributes their specific values. These user-focused scenarios could help to make useful tests. (The name resetData, by the way, violates the principle of least surprise, because people will assume it resets the values to their initial values.)
The other problem you describe (unit-tests don't tell you that you have not updated resetData if a new attribute was added) indicates you are expecting too much from unit-testing: This phenomenon is not limited to trivial functionality. If you add an attribute and forget to update some complex function to make use of it, plus you leave the tests as they are, then the tests will also continue to pass.
You could think of clever tricks, like, keeping and comparing the list of attributes that were known at the moment the method was written against the current list of attributes (obtained using introspection), but that seems like overkill to me - unless you are developing safety critical code, which I believe dart might not be designed for.
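The "clever trick" described above could be sketched as follows (Python; the attribute names come from the earlier Data example, and the known-attribute list is the hypothetical record made when resetData was written):

```python
# The set of attributes resetData knew about at the time it was written.
KNOWN_ATTRIBUTES = {'a', 'b', 'c', 'd'}

class SomeObject:
    pass

class Data:
    def __init__(self):
        self.a = ''
        self.b = 0
        self.c = False
        self.d = SomeObject()

def test_reset_data_knows_all_attributes():
    # If someone adds an attribute 'e' without revisiting resetData,
    # the introspected set no longer matches and this test fails loudly.
    current = set(vars(Data()))
    assert current == KNOWN_ATTRIBUTES, f"resetData may be stale: {current}"
```

As noted, this is probably overkill outside safety-critical code, but it shows the shape of the guard.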

Is there a standard for whether a class under test should be constructed in a fixture or in a test?

I'm just curious, are there any standard guidelines that state whether an instance of a class under test should be constructed in a fixture or in the actual test case?
Thanks!
I'm not aware of a standard reference on that topic. Here's what I'd do:
If I had only one test to write, or if I needed an instance of the class under test that was constructed differently than any other instance of that class in my test suite, I'd just instantiate it in the test. Why make it any more complicated than you have to? If I needed to use the same instance over and over again, I'd put it in a fixture.
I do think it's important to construct only the fixtures you need for a given test case, so that there's nothing to mislead the reader. That means either using whatever scoping mechanism your test framework provides (e.g. an rspec context block or a whole new xUnit TestCase) to construct a given fixture only before the tests that need it, or moving instance construction from fixtures to test. To avoid duplication, you can always write a method to construct an instance and call it from as many tests as you want.
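The "write a method to construct an instance and call it from as many tests as you want" suggestion might look like this (a Python sketch; Widget and its defaults are made up):

```python
import unittest

class Widget:
    def __init__(self, size, color):
        self.size = size
        self.color = color

class WidgetTests(unittest.TestCase):
    # A factory method instead of a shared fixture: each test gets a
    # fresh instance, and tests needing a variation just override args.
    def make_widget(self, size=10, color="red"):
        return Widget(size=size, color=color)

    def test_default_size(self):
        self.assertEqual(self.make_widget().size, 10)

    def test_custom_color(self):
        self.assertEqual(self.make_widget(color="blue").color, "blue")
```

There is no shared state between tests, yet the construction logic lives in one place.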
I tend to avoid putting anything inside a fixture.
After a while the CUT state tends to get out of hand as the number of tests in that fixture increases. Each test requires similar but slightly different behavior, which may or may not get added to some initialization/setup method.
Having the CUT at the fixture level creates shared state between the tests, which can cause test failures due to run order - a pain to find and fix.
Another readability issue arises when a test fails - people tend to forget about initialization that happened in another method.
There are better ways to avoid code duplication - using an auto-mocking container to create objects with fake parameters, or factory methods that enable different initialization for each test (if required) and create more readable and maintainable tests.

Can I make a unit test inconclusive if a requisite unit test fails?

Consider unit testing a dictionary object. The first unit tests you might write are a few that simply add items to the dictionary and check for exceptions. The next tests may be something like testing that the count is accurate, or that the dictionary returns a correct list of keys or values.
However, each of these later cases requires that the dictionary can first reliably add items. If the tests which add items fail, we have no idea whether our later tests fail because what they're testing is implemented incorrectly, or because the assumption that we can reliably add items is incorrect.
Can I declare a set of unit tests which cause a given unit test to be inconclusive if any of them fail? If not, how should I best work around this? Have I set up my unit tests wrong, that I'm running into this predicament?
It's not as hard as it might seem. Let's rephrase the question a bit:
If I test my piece of code which requires System.Collections.Generic.List<T>.Add to work, what should I do when one day Microsoft decides to break .Add on List<T>? Do I make my tests depending on this to work inconclusive?
The answer to the above is obvious: you don't. You let them fail for one simple reason - your assumptions have failed, and the test should fail. It's the same here. Once you get your add tests to work, from that point on you assume add works. You shouldn't treat your own tested code any differently than 3rd party tested code. Once it's proven to work, you assume it indeed does.
On a different note, you can utilize a concept called guard assertions. In your remove test, after the arrange phase you introduce an additional assert phase, which verifies your initial assumptions (in this case, that add is working). More information about this technique can be found here.
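A guard assertion in a remove test might look like this (a Python sketch; the minimal dictionary-like class is made up for illustration):

```python
class SimpleDict:
    """A minimal dictionary-like collection for the example."""
    def __init__(self):
        self._items = {}
    def add(self, key, value):
        self._items[key] = value
    def remove(self, key):
        del self._items[key]
    def contains(self, key):
        return key in self._items

def test_remove():
    # Arrange
    d = SimpleDict()
    d.add("k", 1)
    # Guard assertion: verify the assumption that add worked, so a
    # failure here points at add, not at the remove logic under test.
    assert d.contains("k")
    # Act
    d.remove("k")
    # Assert
    assert not d.contains("k")
```

When the guard fails, the failure message tells you the precondition broke, not the behavior the test is named after.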
To add an example, NUnit uses the concept above disguised under the name Theory. This does exactly what you proposed (yet it seems to be more related to data driven testing rather than general utility):
The theory itself is responsible for ensuring that all data supplied meets its assumptions. It does this by use of the Assume.That(...) construct, which works just like Assert.That(...) but does not cause a failure. If the assumption is not satisfied for a particular test case, that case returns an Inconclusive result, rather than a Success or Failure.
However, I think what Mark Seemann states in an answer to the question I linked makes the most sense:
There may be many preconditions that need to be satisfied for a given test case, so you may need more than one Guard Assertion. Instead of repeating those in all tests, having one (and one only) test for each precondition keeps your test code more maintainable, since you will have less repetition that way.
Nice question, I often ponder this and had this problem the other day. What I did was get the basics of our collection working using a dictionary behind the scenes. For example:
public class MyCollection
{
    private readonly IDictionary<string, int> _backingStore;

    public MyCollection(IDictionary<string, int> backingStore)
    {
        _backingStore = backingStore;
    }
}
Then we test drove the addition implementation. As we had the dictionary by reference we could assert that after adding items our business logic was correct.
For example the pseudo code for the addition was something like:
public void Add(Item item)
{
    // Check we have not added before
    // More business logic...
    // Add
}
Then the test could be written such as:
var subject = new MyCollection(backingStore);
var itemThatWeAdded = new Item();
subject.Add(itemThatWeAdded);
Assert.That(backingStore.Contains(itemThatWeAdded), Is.True);
We then went on to drive out the other methods such as retrieval, and deletion.
Your question is what you should do when the addition breaks, in turn breaking the retrieval. This is a catch-22 scenario. Personally I'd rather ditch the backing store and treat it as an implementation detail. So this is what we did. We refactored the tests to use the system under test, rather than the backing store, for the asserts. The great thing about the backing store being public initially is that it allows you to test drive small parts of the codebase, rather than having to implement both addition and retrieval in one go.
The test for addition then looked like the following after we refactored the collection to not expose the backing store.
var subject = new MyCollection();
var item = new Item();
subject.Add(item);
Assert.That(subject.Has(item), Is.True);
In this case I think this is fine. If you cannot add items successfully then you sure as hell cannot retrieve anything, because you've not added them. As long as your tests are named well, any developer seeing a test such as "CanOnlyAddUniqueItemsToCollection" fail will be pointed in the right direction - in other words, that the addition is broken. Just make sure your tests are named well and you should be giving as much help as possible.
I don't see this as too much of a problem. If your Dictionary class is not too big, and the unit test for that class is the only unit test testing that code, then when your add method is broken and multiple tests fail, you still know the problem is in the Dictionary class and can identify it, debug and fix it easily.
Where it becomes a problem is when you have other code smells or design problems such as:
unit tests are testing many application classes; using mocks instead can help here.
unit tests are actually system tests creating and testing many application classes at once.
the Dictionary class is too big and complex so when it breaks and tests fail it's difficult to figure out what part is broken.
This is very interesting. We use NUnit and, as best I can tell, it runs test methods alphabetically. That might be an overly artificial way to order your tests, but if you built up your test classes such that alphabetically/numerically named prerequisite methods came first, you might accomplish what you want.
I find myself writing a test method, firing just it to watch it fail, and then writing the code to make it pass. When I'm all done I can run the whole class and everything passes - it doesn't matter what order the tests run in, because everything 'works' thanks to the incremental dev I did.
Now later on if I break something in the thing I'm testing, who knows what will fail in the harness. I guess it doesn't really matter to me - I've got a list of failures and I can tease out what went wrong.

Unit Testing a 'SetDefaults()' method

I am in the process of learning to unit test. I have a 'domain object' that doesn't do much apart from hold state (i.e. Employee without any business logic). It has a method SetDefaults() which just fills its state with reasonable values. A simple method.
But when I go to unit test this method all I can think of is to run the method then check that every field is what it should be. Like (in C#):
[TestMethod()]
public void SetDefaultsTest()
{
Employee employee = new Employee();
employee.SetDefaults();
Assert.AreEqual("New Employee", employee.Name);
Assert.AreEqual(30, employee.Age);
// etc.
}
It feels wrong to duplicate the entire functionality of SetDefaults() within my test. Should I just leave this method untested? The problem is that I'd like a test to ensure that when new properties are added to the class they are also added to the SetDefaults() method.
Trivial getters and setters sometimes don't have unit tests written for them. If that's all that SetDefaults() does, it probably won't hurt to skip it.
One thing you would want to consider testing, though, is that none of the set properties of the employee instance are null after calling SetDefaults():
var nonNullProperties = new object[] { employee.Name, employee.Age, ... };
foreach (var property in nonNullProperties)
    Assert.IsNotNull(property);
This makes sense, since you really just care that they are set to some default value, and not so much that they're a specific value.
It depends on what value you get out of that test. Don't test just for the sake of testing. However, if those defaults are important to be correct, and not change, then go right ahead.
A lot of testing involves deciding "Hey, is this going to make my job easier in the long run?" Are these defaults changing all the time, or are they constant? Are the defaults very complicated, or are they a handful of strings and numbers? How important is it to the customer that these defaults are correct? How important is it to the other developers that the defaults are correct?
The test could do a million things, but if it doesn't add some value to someone who cares about it, then don't worry about it. If you've been asked to automate testing of 100% of all your code, perhaps reading up and discussing some of the ideas presented in the following blogs might benefit your team:
http://blog.jayfields.com/2009/02/thoughts-on-developer-testing.html
http://msdn.microsoft.com/en-us/magazine/cc163665.aspx
Otherwise, if it doesn't add much value or constantly breaks, I'd say go ahead and leave it out.
That looks like a pretty reasonable unit test to me. You are providing a simple test that checks the results of calling that method. It doesn't help you with any new properties though, as the existing test would still pass. Perhaps you could use Reflection to iterate over all the properties of the object and check they are non-null?
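The reflection idea might be sketched like this (Python; the Employee class and its default values are hypothetical):

```python
class Employee:
    def __init__(self):
        self.name = None
        self.age = None

    def set_defaults(self):
        self.name = "New Employee"
        self.age = 30

def test_set_defaults_leaves_nothing_unset():
    employee = Employee()
    employee.set_defaults()
    # Iterate over every attribute via introspection: a newly added
    # attribute that set_defaults forgets will still be None here,
    # so this test fails without being updated by hand.
    for attr, value in vars(employee).items():
        assert value is not None, f"{attr} was not given a default"
```

This checks "every property gets some default" without duplicating the specific default values in the test.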
Also SetDefaults() seems like a slightly odd method. Why not just initialise a new Employee to those values to start with? That way there is no risk that another coder will create an Employee and forget to call SetDefaults().

What is an ObjectMother?

What is an ObjectMother and what are common usage scenarios for this pattern?
ObjectMother starts with the factory pattern, by delivering prefabricated test-ready objects via a simple method call. It moves beyond the realm of the factory by
facilitating the customization of created objects,
providing methods to update the objects during the tests, and
if necessary, deleting the object from the database at the completion of the test.
Some reasons to use ObjectMother:
* Reduce code duplication in tests, increasing test maintainability
* Make test objects super-easily accessible, encouraging developers to write more tests.
* Every test runs with fresh data.
* Tests always clean up after themselves.
(http://c2.com/cgi/wiki?ObjectMother)
See "Test Data Builders: an alternative to the Object Mother pattern" for an argument of why to use a Test Data Builder instead of an Object Mother. It explains what both are.
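For contrast, a Test Data Builder along the lines that article advocates might be sketched as (Python; the Order fields and defaults are made up):

```python
class Order:
    def __init__(self, order_number, amount):
        self.order_number = order_number
        self.amount = amount

class OrderBuilder:
    """Builds a sensible default Order, with chainable overrides."""
    def __init__(self):
        self._order_number = "ORD-001"
        self._amount = 100

    def with_amount(self, amount):
        self._amount = amount
        return self

    def build(self):
        return Order(self._order_number, self._amount)

# A test overrides only the attribute it actually cares about:
big_order = OrderBuilder().with_amount(10_000).build()
```

Each test states only what matters to it, while the builder supplies everything else.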
As stated elsewhere, ObjectMother is a Factory for generating Objects typically (exclusively?) for use in Unit Tests.
Where they are of great use is for generating complex objects where the data is of no particular significance to the test.
Where you might have created an empty instance below such as
Order rubishOrder = new Order("NoPropertiesSet");
_orderProcessor.Process(rubishOrder);
you would use a sensible one from the ObjectMother
Order motherOrder = ObjectMother.SimpleOrder();
_orderProcessor.Process(motherOrder);
This tends to help with situations where the class being tested starts to rely on a sensible object being passed in.
For instance, if you added some OrderNumber validation to the Order class above, you would simply need to set the OrderNumber inside ObjectMother.SimpleOrder() in order for all the existing tests to pass, leaving you to concentrate on writing the validation tests.
If you had just instantiated the object in the test you would need to add it to every test (it is shocking how often I have seen people do this).
Of course, this could just be extracted out to a method, but putting it in a separate class allows it to be shared between multiple test classes.
Another recommended practice is to use good descriptive names for your methods, to promote reuse. It is all too easy to end up with one object per test, which is definitely to be avoided. It is better to generate objects that represent general rather than specific attributes, and then customize them for your test. For instance ObjectMother.WealthyCustomer() rather than ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorsche() and ObjectMother.CustomerWith1MdollarsSharesInBigPharmaAndDrivesAPorscheAndAFerrari()
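An ObjectMother along those lines might be sketched as follows (Python; the Customer class and the specific values are illustrative):

```python
class Customer:
    def __init__(self, name, net_worth):
        self.name = name
        self.net_worth = net_worth

class ObjectMother:
    """Delivers prefabricated, test-ready objects with general,
    descriptive names; individual tests customize afterwards."""

    @staticmethod
    def simple_customer():
        return Customer(name="Jane Doe", net_worth=50_000)

    @staticmethod
    def wealthy_customer():
        customer = ObjectMother.simple_customer()
        customer.net_worth = 5_000_000
        return customer

# A test that needs one specific detail tweaks the general object,
# rather than demanding a new hyper-specific mother method:
customer = ObjectMother.wealthy_customer()
customer.name = "Rich Uncle"
```

Keeping the mother methods general and customizing at the call site avoids the one-object-per-test explosion described above.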