Do I need to write a test to confirm that the response from my function no longer contains a given attribute? - unit-testing

We got a requirement saying that a downstream consumer no longer wants to receive a certain attribute in our response. We're not following Semantic Versioning contracts (another story).
Given this is removal of an attribute, should I write a (unit) test to assert it's no longer being returned?
Some on my team make the (interesting) objection:
it adds complexity to the code to mention things that don't exist.
I tried arguing that such a test demonstrates the Acceptance Criteria were met.
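For illustration only (the builder method, the attribute name, and the response shape below are hypothetical, not from our codebase), the kind of test under discussion might look something like this:

    using System.Collections.Generic;
    using Xunit;

    public class CustomerResponseTests
    {
        // Hypothetical sketch: BuildResponse and "legacyScore" stand in for the real
        // function and the attribute that downstream no longer wants to receive.
        [Fact]
        public void Response_NoLongerContainsLegacyScore()
        {
            IDictionary<string, object> response = CustomerResponseBuilder.BuildResponse(customerId: 42);

            Assert.False(response.ContainsKey("legacyScore"),
                "The 'legacyScore' attribute was removed per the acceptance criteria and must not reappear.");
        }
    }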

Related

Mockito: Assertion on output vs Verification

I have been writing my unit tests, and most of the time I just assert on the output values returned by the object under test; occasionally I use verify calls to make sure that certain methods are either never invoked or invoked a certain number of times.
Right now, I am being asked by a code reviewer to add verify calls on each mock I am using, in addition to the assertions I already have on the output. Do you think it is worthwhile to add these verify calls?
Though this is dangerously close to being a pure matter of opinion (and thus off-topic for StackOverflow), you used the mockito tag, so I can answer with regard to Mockito's design philosophy as evidenced through blog posts from Mockito's originator Szczepan Faber linked from Mockito's class-level documentation.
Add verify calls in one of two cases:
It is part of the specified behavior to make or not make the call, as in a wrapper over an RPC layer. In cases like these, the external interaction is an implementation-agnostic requirement of a working system, so it makes sense to check the right number of calls with the right parameters.
There is no other user-visible way to determine the state of the object. You could add some, but it might make more design sense to infer the state based on the object's interactions with the environment.
You probably do not need to add verify calls in either of these cases:
To call verifyNoMoreInteractions for interactions that don't matter or don't have meaningful side effects ("Should I worry about the unexpected?" blog post from Mockito documentation #8)
To verify methods you've stubbed with non-default results ("Asking and telling" blog link from the code block in Mockito documentation #2), because the test should only produce correct results if the stubbed methods supply the necessary data.
The risk of over-verification here is that a test can become brittle, such that a perfectly-reasonable change of implementation results in a failing test (i.e. different methods are called, or not called, or not called in the same order). If a senior code reviewer tells you to add verifications, I'm not going to say you can't: It's absolutely a judgment call here, and your reviewer may be thinking extra-defensively with regard to your implementation. However, remember that the point of a test is to verify that the implementation conforms to the promises, not that the implementation looks or works a certain way internally. If you get too cavalier with your verifications, it might result in test maintenance difficulties later, while getting you no closer to having working code.
As an additional resource, see Martin Fowler's post "Mocks aren't Stubs", which describes the variety of test doubles (fakes, dummies, stubs, mocks) and some of the tradeoffs about using them alongside or instead of classical state-based testing.
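Mockito itself is a Java library; purely to illustrate the same distinction in C#, here is a rough Moq sketch (the IRpcChannel and RpcWrapper types are made up for this example):

    using Moq;
    using Xunit;

    // Hypothetical types: a thin wrapper whose whole job is to forward to an RPC channel.
    public interface IRpcChannel { string Call(string message); }

    public class RpcWrapper
    {
        private readonly IRpcChannel _channel;
        public RpcWrapper(IRpcChannel channel) => _channel = channel;
        public string Send(string message) => _channel.Call(message);
    }

    public class RpcWrapperTests
    {
        // Case 1 above: the interaction itself is the specified behavior, so verify it.
        [Fact]
        public void Send_ForwardsExactlyOneCallToTheChannel()
        {
            var channel = new Mock<IRpcChannel>();
            new RpcWrapper(channel.Object).Send("ping");

            channel.Verify(c => c.Call("ping"), Times.Once());
        }

        // Ordinary case: the stub only supplies data, so asserting on the output is enough;
        // also verifying the stubbed call would just make the test more brittle.
        [Fact]
        public void Send_ReturnsTheChannelsReply()
        {
            var channel = new Mock<IRpcChannel>();
            channel.Setup(c => c.Call("ping")).Returns("pong");

            Assert.Equal("pong", new RpcWrapper(channel.Object).Send("ping"));
        }
    }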

Should we unit test what method actually does, or what it should do?

The question might seem a bit weird, but I'll explain it.
Consider the following:
We have a service FirstNameValidator, which I created for other developers so they have a consistent way to validate a person's first name. I want to test it, but because the full set of possible inputs is infinite (or very, very big), I only test a few cases:
Assert.IsTrue(FirstNameValidator.Validate("John"));
Assert.IsFalse(FirstNameValidator.Validate("$$123"));
I also have LastNameValidator, which is 99% identical, and I wrote a test for it too:
Assert.IsTrue(LastNameValidator.Validate("Doe"));
Assert.IsFalse(LastNameValidator.Validate("__%%"));
But later a new structure appeared - PersonName, which consists of a first name and a last name. We want to validate it too, so I create a PersonNameValidator. Obviously, for reusability I just call FirstNameValidator and LastNameValidator. Everything is fine until I want to write a test for it.
What should I test?
The fact that FirstNameValidator.Validate was actually called with the correct argument?
Or do I need to create a few cases and test them?
That is actually the question - should we test what the service is expected to do? It is expected to validate a PersonName; how it does so, we don't actually care. So we pass a few valid and invalid inputs and expect the corresponding return values.
Or, maybe, test what it actually does? It really just calls the other validators, so test that (a .NET mocking framework allows it).
Unit tests should be acceptance criteria for a properly functioning unit of code...
they should test what the code should and shouldn't do; you will often find corner cases while you are writing the tests.
If you refactor code, you will often have to refactor tests... This should be viewed as part of the original effort, and should bring glee to your soul, as you have made the product and process an improvement of such magnitude.
Of course, if this is a library with outside (or internal, depending on company culture) consumers, you have documentation to consider before you are completely done.
Edit: also, those tests are pretty weak. You should have a definition of what is legal in each, and actually test inclusion and exclusion of at least all of the classes of glyphs. The tests can still share related code; e.g. isValidUsername(name, allowsSpace) could work for both a first name and a whole name, depending on whether spaces are allowed.
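To make that last point concrete, a rough sketch (the validator helper and the legality rules below are invented for illustration, not taken from the question):

    using NUnit.Framework;

    public class NameValidationTests
    {
        // Hypothetical isValidUsername-style helper; the legal-character rules are made up.
        [TestCase("John", false, ExpectedResult = true)]        // plain letters accepted
        [TestCase("Anne-Marie", false, ExpectedResult = true)]  // hyphen accepted
        [TestCase("John Doe", true, ExpectedResult = true)]     // space accepted only when allowed
        [TestCase("John Doe", false, ExpectedResult = false)]   // space rejected for a single name
        [TestCase("$$123", false, ExpectedResult = false)]      // digits and symbols rejected
        [TestCase("", false, ExpectedResult = false)]           // empty input rejected
        public bool Validate_CoversEachCharacterClass(string input, bool allowsSpace)
            => NameValidator.IsValidUsername(input, allowsSpace);
    }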
You have formulated your question a bit strangely: both options that you describe would test that the function behaves as it should - but in each case on a different level of granularity. In one case you would test the behaviour based on the API that is available to a user of the function; whether and how the function implements its functionality with the help of other functions/components is not relevant. In the second case you test the behaviour in isolation, including the way the function interacts with its depended-on components.
On a general level it is not possible to say which is better - depending on the circumstances, each option may be the best. In general, isolating a piece of software usually requires more effort to implement the tests and makes the tests more fragile against implementation changes. That means going for isolation should only be done in situations where there are good reasons for it. Before getting to your specific case, I will describe some situations where isolation is recommendable.
With the original depended-on component (DOC), you may not be able to test everything you want. Assume your code does error handling for the case where the DOC returns an error code. But if the DOC cannot easily be made to return an error code, it is difficult to test your error handling. In this case, if you replace the DOC with a double, you can make the double return an error code, and thus also test your error handling.
The DOC may have non-deterministic or system-specific behaviour. Some examples are random number generators or date and time functions. If this makes testing your functions difficult, it would be an argument to replace the DOC with some double, so you have control over what is provided to your functions.
The DOC may require a very complex setup. Imagine a complex database or some complex XML document that needs to be provided. For one thing, this can make your setup quite complicated; for another, your tests get fragile and will likely break if the DOC changes (think of the XML schema changing...).
The setup of the DOC or the calls to the DOC are very time consuming (imagine reading data from a network connection, computing the next chess move, solving the TSP, ...). Or, the use of the DOC prolongs compilation or linking significantly. With a double you can possibly shorten the execution or build time significantly, which is the more interesting the more often you are executing / building the tests.
You may not have a working version of the DOC - possibly the DOC is still under development and is not yet available. Then, with doubles you can start testing nevertheless.
The DOC may be immature, such that with the version you have, your tests are unstable. In such a case it is likely that you lose trust in your test results and start ignoring failing tests.
The DOC itself may have other dependencies which have some of the problems described above.
These criteria can help you come to an informed decision about whether isolation is necessary. Considering your specific example: the way you have described the situation, I get the impression that none of the above criteria is fulfilled, which for me leads to the conclusion that I would not isolate the function PersonNameValidator from its DOCs FirstNameValidator and LastNameValidator.
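Concretely, the non-isolated, state-based option could look something like this (a sketch only; the PersonName shape and the chosen inputs are assumptions):

    using NUnit.Framework;

    public class PersonNameValidatorTests
    {
        // State-based test: exercise PersonNameValidator through its public API and
        // let it call the real FirstNameValidator/LastNameValidator internally.
        [Test]
        public void Validate_AcceptsAWellFormedName()
        {
            Assert.IsTrue(PersonNameValidator.Validate(new PersonName("John", "Doe")));
        }

        [Test]
        public void Validate_RejectsANameWithAnInvalidPart()
        {
            Assert.IsFalse(PersonNameValidator.Validate(new PersonName("John", "__%%")));
            Assert.IsFalse(PersonNameValidator.Validate(new PersonName("$$123", "Doe")));
        }
    }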

How should I handle unit testing for a bug we don't intend to fix?

A big chunk of our codebase has no unit tests whatsoever. We've been writing unit tests for any new code we add, but we're only just now starting to go back and add unit tests for the existing code. While writing & running unit tests for an existing calculation method, I found a bug: there's a particular input edge case that the calculation does not handle correctly.
The bug has never been detected before because it's not actually reachable in the application; the particular input that hits the bug is a subset of a group of inputs that are trivial and handled directly rather than being sent to the somewhat expensive calculation method. My boss has decided that since the bug can't be hit from the application, it's not worth digging through the calculation method to fix it.
Using XUnit, how should I best mark this bug as something we're aware of but have chosen not to fix? A failed test would break our build automation, so I can't just leave it as is. The input that fails is currently being generated as part of a PropertyData for a Theory.
Does XUnit have a special indicator for this? Should I adjust the method that generates the inputs for the PropertyData to exclude that case, add a comment explaining why, and then put in a skipped Fact covering that case?
You shouldn't have unit tests providing input data that your requirements state are not supported cases. In this case you don't have a bug; you simply have requirements stating that the given input is not supported and is considered invalid.
If you really want to, you can have tests that provide invalid input and assert failure, if you choose to make it an explicit requirement that this input must fail. If you don't want to do that, simply don't create tests for use cases you don't have.
Skipping the Fact strikes a balance: the test is not run, but a noticeable warning remains that this is something to take care of in the future.
Alternatively, you can categorize the test and configure the runner to skip it; see, e.g., the Category sample (v1, v2).
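For example, a skipped Fact plus a category trait might look like this (the calculator name and the edge-case input below are placeholders; Skip and Trait are standard xUnit attributes):

    using Xunit;

    public class CalculationTests
    {
        // The skip reason documents the known, deliberately unfixed edge case;
        // it shows up in the test output without breaking the build.
        [Fact(Skip = "Known limitation: this input is unreachable from the application and will not be fixed.")]
        [Trait("Category", "KnownLimitation")]
        public void Calculate_EdgeCaseInput_IsKnownNotToBeHandled()
        {
            // Hypothetical names: ExpensiveCalculator and the input value are placeholders.
            var result = ExpensiveCalculator.Calculate(edgeCaseInput: 0);
            Assert.Equal(0, result);
        }
    }

The edge-case value would also be excluded from the generator feeding the PropertyData-driven Theory, with a comment pointing at the skipped Fact.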

Should you Unit Test Business Logic methods that consists primarily of a query?

I have a business logic layer with a broad spectrum of classes and their corresponding methods. Creating a Unit Test for a method is generally a given but, in some cases, the method really just returns a set of matching records from the database. It seems kind of silly to write a Unit Test by, say, storing five matching records, then calling the BLL method and checking that it returns all five records.
What is best practice here? What do you actually do - as opposed to what you'd like to say you would ideally do?
I believe the real question here is, why do you have methods in your Business Logic Layer that don't seem to contain any real business logic?
In this case, it seems like the method in question is just a Repository style method to retrieve records matching some criteria from the database.
In either situation, I would still Unit Test the method. There are several reasons:
Since the method is in the Business Logic Layer (in your case), it's possible that the method could end up becoming more involved. Adding the Unit Test now will ensure that even in the future, no matter the logic, the method is still getting tested for unexpected behavior.
If there is any logic at all (like determining which records match the business criteria), you still have to test that logic.
If you end up moving the method to the Data Layer, you should be testing your query against some mock data repository to make sure your queries work. That way, if your queries become more complex in the future...you're covered.
I use DBUnit to fill in the database with a number of records, more than should be returned by the query. Then, call the method, and make sure that only the right records are returned. This ensures that your query logic works, and helps to catch regression issues in the future if you refactor the database.
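DBUnit is a Java tool; in C# the same pattern might be sketched roughly like this (the TestDatabase seeding helper, OrderService, and Order type are all hypothetical):

    using System.Linq;
    using NUnit.Framework;

    public class OrderQueryTests
    {
        [Test]
        public void GetOpenOrders_ReturnsOnlyTheMatchingRecords()
        {
            // Hypothetical seeding helper: insert more rows than the query should return.
            using var db = TestDatabase.CreateEmpty();
            db.Insert(new Order { Id = 1, Status = "Open" });
            db.Insert(new Order { Id = 2, Status = "Open" });
            db.Insert(new Order { Id = 3, Status = "Closed" });    // must be filtered out
            db.Insert(new Order { Id = 4, Status = "Cancelled" }); // must be filtered out

            var result = new OrderService(db.ConnectionString).GetOpenOrders();

            // Only the right records come back: the filter worked, nothing extra leaked in.
            Assert.That(result.Select(o => o.Id), Is.EquivalentTo(new[] { 1, 2 }));
        }
    }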

Should I create a new test method for each assertion?

I know that this is subjective, but I'd like to follow the most common practice.
Do you normally create one test method for each class method and stuff it with multiple assertions, or do you create one test method per assertion?
For example, if I am testing a bank account's withdraw method, and I want to make sure that an exception is thrown if the user tries to overdraw the account or withdraw a negative amount, should I create testOverdraw and testNegativeWithdrawal, or would I just combine those two assertions in a method called testWithdraw?
Think of it this way: each test should stand on its own and exercise a relatively discrete set of functionality. If you want to assert whether three things are true about some method that you have created, then you should create a test that includes those three things.
Thus, I have to strongly disagree with the others who have answered. Arbitrarily limiting yourself to one assertion per test will do nothing for you except make your testing unwieldy and tedious. Ultimately it may put you off testing altogether - which would certainly be a shame: bad for your project and career.
Now, that does not mean you have license to write large, unwieldy or multi-purpose testing routines. Indeed, I don't think I've ever written one that is more than 20 lines or so.
As far as knowing which assertion fails when there are several in one function: both NUnit and MSTest give you a description and a link that will take you right to the offending line when an assertion fails (NUnit will require an integration tool such as TestDriven.net). This makes figuring out the failure point trivial. Both will also stop on the first failure in a function, and both give you the ability to do a debug walkthrough as well.
Personally I would create one test for each assertion otherwise you have to dig to find the reason for the failure rather than it being obvious from the test name.
If you have to write a few lines of code to set up a test and don't want to duplicate that then depending on your language and test framework you should be able to create a set of tests where each test will execute a block of code before it runs.
Make multiple test methods; do not combine them into one. Each of your test methods should test one behavior of the method. As you are saying, testing with a negative balance is a different behavior then testing with a positive balance. So, that would be two tests.
You want to do it this way so that when a test fails, you are not stuck trying to figure out which part in that test failed.
One way to do it is to have one separate test method for each different scenario or setup. In your case you'd probably want one scenario where there are sufficient funds and one scenario where there are insufficient funds, and assert that in the first one everything works and in the second one the same operations won't work.
I would recommend having one test method per assertion, or rather per expected behavior. This makes it much faster to localize the erroneous code if a test fails.
I would make those two separate assertions.
The first represents a valid operation that would happen if a user were using the account regularly; the second represents a case where data sanitizing was not done, or not done properly.
You want separate test cases so that you can logically implement the test cases as needed, especially in regression scenarios where running all tests can be prohibitively expensive.
testOverdraw and testNegativeWithdrawal are two separate behaviors. They shall be tested separately.
A good rule of thumb is to have only one action on the method under test in one unit test (not counting setup and assertions).
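Putting the advice above together, a sketch (BankAccount, its behavior, and the exception types are assumptions for illustration; the [SetUp] method is the shared "block of code before each test" mentioned earlier):

    using System;
    using NUnit.Framework;

    public class WithdrawTests
    {
        private BankAccount _account;

        // Shared setup keeps each test method focused on a single behavior and a single action.
        [SetUp]
        public void CreateAccountWithFunds() => _account = new BankAccount(initialBalance: 100m);

        [Test]
        public void Withdraw_MoreThanBalance_Throws()
            => Assert.Throws<InvalidOperationException>(() => _account.Withdraw(500m));

        [Test]
        public void Withdraw_NegativeAmount_Throws()
            => Assert.Throws<ArgumentOutOfRangeException>(() => _account.Withdraw(-10m));

        [Test]
        public void Withdraw_ValidAmount_ReducesTheBalance()
        {
            _account.Withdraw(40m);
            Assert.AreEqual(60m, _account.Balance);
        }
    }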
From the NUnit documentation:
"If an assertion fails, the method call does not return and an error is reported. If a test contains multiple assertions, any that follow the one that failed will not be executed. For this reason, it's usually best to try for one assertion per test."
http://www.nunit.org/index.php?p=assertions&r=2.4.6
However, nothing forces you to follow best practices. If it isn't worth your time and effort for this particular project to have 100% granular tests, where one bug means exactly one test failure (and vice-versa), then don't do it. Ultimately it is a tool for you to use at the best cost/benefit balance that you can. That balance depends on a lot of variables that are specific to your scenario, and specific to each project.