How to avoid unnecessary preconditions in unit testing?

Ideally, a test class is written for every class in the production code. Within a test class, however, not all test methods require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?

I suggest creating separate methods that wrap the necessary precondition setup. Do not confuse this approach with traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: receipt exists/doesn't exist, receipt is invalid date-wise, receipt is not committed. Our happy path is the default setup (for example done via traditional test setup). The happy-path test would then be as simple as (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special condition cases we simply write tiny, dedicated methods that arrange our test environment (e.g. set up dependencies) so that the conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
    ReceiptDoesNotExistInRepository("701");
    receiptProvider.GetReceipt("701").IsNull();
}
[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes/no setup:
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    ReceiptIsUncommitted("701");
    receiptProvider.GetReceipt("701").Throws<Exception>();
}
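For illustration, here is one way such helpers might look, assuming the provider is built around a mocked repository (using Moq; IReceiptRepository, Find, and the Receipt type are hypothetical stand-ins, not part of the original example):

private Mock<IReceiptRepository> repositoryMock;

private void ReceiptDoesNotExistInRepository(string receiptId)
{
    // Override the happy-path setup: the repository now finds nothing.
    repositoryMock.Setup(r => r.Find(receiptId)).Returns((Receipt)null);
}

private void ReceiptHasInvalidDate(string receiptId)
{
    // The receipt exists, but its date falls outside the validator's window.
    repositoryMock.Setup(r => r.Find(receiptId))
                  .Returns(new Receipt(receiptId) { Date = DateTime.MinValue });
}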

One option is to group tests with the same preconditions in the same classes; this also helps avoid test classes of over a thousand lines. You can also group the creation of the preconditions in separate methods and let each test call the applicable method. Do this when most of the methods have different preconditions; otherwise you can just use a setup method that is called before each test.

I like to use a Setup method that gets called before each test runs. In this method I instantiate the class I want to test, giving it any dependencies it needs to be created. Then I set the specific details for the individual tests inside the test methods themselves. This moves any common initialization of the class out to the setup method and allows each test to focus on exactly what it needs to evaluate.
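A minimal NUnit sketch of that pattern (the class under test and its fake dependencies are illustrative names):

[TestFixture]
public class ReceiptProviderTests
{
    private ReceiptProvider receiptProvider;

    [SetUp]
    public void SetUp()
    {
        // Common initialization: build the system under test with its dependencies.
        receiptProvider = new ReceiptProvider(new FakeReceiptRepository(), new FakeReceiptValidator());
    }

    [Test]
    public void GetReceipt_ReturnsReceipt()
    {
        // Only the test-specific detail lives inside the test itself.
        Assert.IsNotNull(receiptProvider.GetReceipt("701"));
    }
}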
You may find this link valuable, it discusses an approach to Test Setups:
In Defense of Test Setup Methods, by Erik Dietrich

Unit Testing and Integration Testing examples

I have created the following four tests in a test class that tests the findCompany() method of a CompanyService.
@Test
public void findCompany_CompanyIdIsZero() {
    exception.expect(IllegalArgumentException.class);
    companyService.findCompany(0);
}

@Test
public void findCompany_CompanyIdIsNegative() {
    exception.expect(IllegalArgumentException.class);
    companyService.findCompany(-100);
}

@Test
public void findCompany_CompanyIdDoesntExistInDatabase() {
    Company storedCompany = companyService.findCompany(100000);
    assertNull(storedCompany);
}

@Test
public void findCompany_CompanyIdExistsInDatabase() {
    Company company = new Company("FAL", "Falahaar");
    companyService.addCompany(company);
    Company storedCompany = companyService.findCompany(company.getId());
    assertNotNull(storedCompany);
}
My understanding is that the first three of these are unit tests. They test the behavior of the findCompany() method, checking how the method responds to different inputs.
The fourth test, though placed in the same class, actually seems to be an integration test to me. It requires a Company to be added to the database first, so that it can be found later on. This introduces external dependencies - addCompany() and database.
Am I right? If so, how should I unit test finding an existing object? Just mock the service to "find" one? I think that defeats the intent of the test.
I appreciate any guidance here.
I look at it this way: the "unit" you are testing here is the CompanyService. In this sense all of your tests look like unit tests to me. Underneath your service, though, there may be another service (you mention a database) that this test is also exercising. This could start to blur the lines with integration testing a bit, but you have to ask yourself if it matters. You could stub out any such underlying service, and you may want to if:
The underlying service is slow to set up or use, making your unit tests too slow.
You want to be sure the behaviour of this test is unaffected by the underlying service - i.e. this test should only fail if there is a bug in CompanyService.
In my experience, provided the underlying service is fast enough, I don't worry too much about my unit tests relying on it. I don't mind a bit of integration leaking into my unit tests, as it has benefits (more integration coverage) and rarely causes problems. If it does cause problems, you can always come back and add stubbing to improve the isolation.
[1, 2, 3, 4] could be unit-based (mocked or not mocked) or integration-based tests. It depends on what you want to test.
Why use mocking? As Jason Sankey said, to test only the service tier, not the underlying tier.
Why use mocking? Your business logic can take many different forms, so you may write several tests for one service method, e.g. create person: no address - exception; no bank account - exception; person is missing required (not-null) attributes - exception.
Can you imagine each test hitting the database in order to exercise every possible exception state (no address, no bank account, etc.)? It is too much work to fill the database just to test all the exception states. Why not use mocked objects which act like 'crippled' objects that do not contain the expected values? Each test constructs its own 'crippled' mock object.
Mocking the various states means your tests stay as simple as possible, because each test method tests only one state. Such tests are clear and easy to understand and maintain. That is one of the goals I aim for when writing a test.
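To make the idea concrete, here is a small C# sketch of one such test (the question's code is Java, but the pattern is identical; PersonService, IPersonRepository, and the validation behavior are assumptions for illustration):

[Test]
public void CreatePerson_Throws_WhenBankAccountIsMissing()
{
    // A 'crippled' person object: valid except for the missing bank account.
    var person = new Person { Name = "Jan", BankAccount = null };

    // The repository is mocked, so no database is touched at all.
    var repository = new Mock<IPersonRepository>();
    var service = new PersonService(repository.Object);

    Assert.Throws<ArgumentException>(() => service.CreatePerson(person));
}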

Unit test 'structure' of method?

Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts about certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and in the developer guidebook a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();
            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);
            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked and that the behavior is as expected. However, would I also write unit tests that assert that:
Constructor for SomeProfiler with the input parameter "someKey" is called
The Constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would essentially be testing the whole structure of the method, in addition to its behavior. My initial thought is that this is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure, and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests using static code analysis or unit tests plus reflection?
Thanks in advance.
I think that your initial thought was right - this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/dll have these properties.
Having said that, I'm a believer in "only code what you need" - why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and so it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test", as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class -- invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here -- the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about the testing a "black box", it seems obvious that there's some hidden magic here in your API. My preference to solve the well-known state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design, there is no "seam" to programmatically introduce a database configuration so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);
            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);
            return sp.invoke();
        }
    }
}
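With the ConnectionManager injected, a test can now supply its own configuration, for example a test-scoped in-memory database (a sketch; the connection-string constructor is an assumption about ConnectionManager, not shown above):

[Test]
public void FindPersonsByNameAndCity_ReturnsCollection()
{
    // The seam: a test-scoped, in-memory database instead of a
    // magically resolved global connection.
    var mgr = new ConnectionManager("Data Source=:memory:");
    var repository = new PersonRepository(mgr);

    var persons = repository.FindPersonsByNameAndCity("Smith", "Dallas");

    Assert.IsNotNull(persons);
}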

Spring, transactions and unit tests - how to set a transaction at class level

I'm using the Spring JUnit runner and its transaction capabilities to start and roll back a transaction before and after every test.
However, I have a test class with some heavy DB initialization, and I want all the test methods to run within a single transaction scope, i.e. start a transaction at the beginning of the test class and roll it back after all tests in the class have completed.
Are you aware that having all the test methods in your class run within a single transaction will cause a lot of trouble? Basically, you can no longer depend on having a clean database, as other test methods will modify it along the way. And because the order of test methods is not specified, you cannot depend on that either (so you'll never know exactly what the database holds). Essentially you are giving up all transactional test support; your only guarantee is that after running the whole test case the database will be clean again (so other test cases won't be affected).
End of grumbling, though. I don't think Spring supports such behavior out of the box (partially due to the reasons highlighted above). However, if you look closely at TransactionalTestExecutionListener, it is responsible for transactional support in Spring-powered tests:
@Override
public void beforeTestMethod(TestContext testContext) throws Exception {
    //...
    startNewTransaction(testContext, txContext);
}
and:
@Override
public void afterTestMethod(TestContext testContext) throws Exception {
    //...
    endTransaction(testContext, txContext);
    //...
}
Now look even closer: there are unimplemented beforeTestClass and afterTestClass methods... You will find detailed instructions on how to wire this all up in chapter 9.3.5 of the Spring reference documentation. Hint: write your own listener and use it instead of TransactionalTestExecutionListener.

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
In the same vein, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an Assume.That syntax for validating setup. This is documented as part of the Theory feature (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want:
"If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon."
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
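As a sketch, with DatabaseIsReachable standing in for whatever prerequisite check fits your setup (it is a hypothetical helper, not an NUnit API):

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        // If the prerequisite fails, every test in this fixture is skipped
        // instead of each one being reported as a separate failure.
        if (!DatabaseIsReachable())
            Assert.Ignore("Database unavailable - skipping Get tests.");
    }

    [Test]
    public void GetTest()
    {
        // ... add a row, then assert it can be retrieved ...
    }
}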
Create a flag that the Add test sets when it fails, and return early from the Get test if the flag is set (in C#/NUnit terms):
private bool addFailed = false;

[Test]
public void TestAdd()
{
    try
    {
        // ... old test code ...
    }
    catch
    {
        addFailed = true; // record the failure for the dependent test
        throw;            // don't forget to rethrow
    }
}

[Test]
public void TestGet()
{
    if (addFailed) return; // skip: Add is known to be broken
    // ... old test code ...
}

Initialize object to test in SetUp or during the test method?

I was wondering whether the object under test should be a field and thus set up during a SetUp method (e.g. in JUnit, NUnit, MS Test, …).
Consider the following examples (this is C# with MSTest, but the idea should be similar for any other language and testing framework):
public class SomeStuff
{
    public string Value { get; private set; }

    public SomeStuff(string value)
    {
        this.Value = value;
    }
}

[TestClass]
public class SomeStuffTestWithSetUp
{
    private string value;
    private SomeStuff someStuff;

    [TestInitialize]
    public void MyTestInitialize()
    {
        this.value = Guid.NewGuid().ToString();
        this.someStuff = new SomeStuff(this.value);
    }

    [TestCleanup]
    public void MyTestCleanup()
    {
        this.someStuff = null;
        this.value = string.Empty;
    }

    [TestMethod]
    public void TestGetValue()
    {
        Assert.AreEqual(this.value, this.someStuff.Value);
    }
}

[TestClass]
public class SomeStuffTestWithoutSetup
{
    [TestMethod]
    public void TestGetValue()
    {
        string value = Guid.NewGuid().ToString();
        SomeStuff someStuff = new SomeStuff(value);
        Assert.AreEqual(value, someStuff.Value);
    }
}
Of course, with just one test method the first example is much too long, but with more test methods this could save quite a bit of redundant code.
What are the pros and cons of each approach? Are there any “Best Practices”?
It's a slippery slope once you start initializing fields and generally setting up the context of your test within the test method itself. This leads to large test methods and really unmanageable fixtures that don't explain themselves very well.
Instead, you should look at BDD-style naming and test organization. Make one fixture per context, rather than one fixture per system under test. Then your [SetUp] truly does set up the context, and your tests can be simple one-liner asserts.
It's much easier to read when you see a test output that does this:
OrderFulfillmentServiceTests.cs
  with_an_order_from_a_new_customer
    it should check their credit from the credit service
    it should give no discount
  with valid credit check
    it should decrement inventory
    it should ship the goods
  with a customer in texas or california
    it should add appropriate sales tax
  with an order from a gold customer
    it should NOT check credit
    it should get expedited shipping added for free
Our tests are now really good documentation for our system. Each "with_an..." is a test fixture, and the items below it are tests. Within those, you setup the context (the state of the world as the class name describes) and then the test does the simple assert that verifies what the method name says it does.
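In C#/NUnit terms, one of those fixtures might look like this (a sketch; OrderBuilder, the service API, and the Discount property are illustrative names, not from the listing above):

[TestFixture]
public class with_an_order_from_a_new_customer
{
    private OrderFulfillmentService service;
    private Order order;

    [SetUp]
    public void EstablishContext()
    {
        // The fixture's single job: build the "new customer" world.
        order = OrderBuilder.ForNewCustomer();
        service = new OrderFulfillmentService();
        service.Fulfill(order);
    }

    [Test]
    public void it_should_give_no_discount()
    {
        Assert.AreEqual(0m, order.Discount);
    }
}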
The second approach is much more readable, and much easier to visually trace.
However, the first approach means less repetition.
What I've found is that I tend to use the SetUp to create objects (especially for things with a number of dependencies), and then set the values used in the test itself. From experience, this provides about the right amount of code-reuse versus readability/traceability.
From talking with Kent Beck about the design of JUnit, I know that test classes were intended as a way to share setup between tests, so using common initialization was the intent. However, that also means splitting tests that require different setup into separate test classes with revealing names.
Personally, I use Setup and Teardown methods for two distinct reasons, although I assume that others will have different reasons.
Use Setup and Teardown methods when there is common initialization logic used by all tests and a single instance of the object(s) created in the Setup is designed to be reused.
Use Setup and Teardown methods when the time it takes to create and destroy the object(s) involved is long enough to slow down the unit testing process when repeated in each TestMethod.
To give you an idea of how often I run across these scenarios: in a project that I am working on now, only two of my test classes (out of about eighty) have an explicit need for Setup and Teardown methods, both times to satisfy my second reason, due to the 10-second maximum I have enabled for each test execution.
I also prefer the readability of having the object(s) created and destroyed within the TestMethod, although it is not a breaking or selling point for me.
The approach I take is somewhere in the middle - I use TearDown and SetUp to create a test "sandbox" directory (and delete it when done), as well as to initialize some test member variables with default values that will be used in the tests. I then set up some helper methods - one is generally called InstantiateClass() - which I call with the default parameters (if any), overriding them as necessary in each individual test.
[Test]
public void TestSomething()
{
    _myVar = "value";
    InstantiateClass();
    RunTheClass();
    Assert.IsTrue(this, that);
}
In practice, I find setup methods make it hard to reason about a failing test: you have to scroll to somewhere near the top of the file (which can be very large) to figure out which collaborator has broken (not easy with mocking), and there is no clickable reference to navigate in your IDE. In short, you lose spatial locality.
Static helper methods reveal the collaborators more explicitly, and you avoid fields which unnecessarily widen the scope of variables.
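For example, reusing the SomeStuff class from the question, a static helper keeps construction visible and clickable at the call site (a sketch):

[TestMethod]
public void TestGetValue()
{
    // 'value' stays local, and the helper is a clickable reference.
    string value = Guid.NewGuid().ToString();
    SomeStuff someStuff = CreateSomeStuff(value);
    Assert.AreEqual(value, someStuff.Value);
}

private static SomeStuff CreateSomeStuff(string value)
{
    // One place for construction details, without widening scope to fields.
    return new SomeStuff(value);
}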