Unit testing code that basically does persistence - Should I bother? - unit-testing

I am using an API which interacts with a database. This API has methods for querying, loading and saving elements to the database. I have written integration tests which do things like create a new instance, then check that when I do a query for that instance, the correct instance is found. This is all fine.
I would like to have faster-running unit tests for this code, but I am wondering about the usefulness of any unit test here and whether they actually give me anything. For example, let's say I have a class for saving some element via the API. This is pseudocode, but it gets across the idea of how the API I am using works.
public class ElementSaver
{
    private ITheApi m_api;

    public bool SaveElement(IElement newElement, IElement linkedElement)
    {
        // create the new element in the store
        IntPtr elemPtr = m_api.CreateNewElement();
        if (elemPtr == IntPtr.Zero)
        {
            return false;
        }
        // copy the attribute across and persist the element
        if (m_api.SetElementAttribute(elemPtr, newElement.AttributeName, newElement.AttributeValue) == false)
        {
            return false;
        }
        if (m_api.SaveElement(elemPtr) == false)
        {
            return false;
        }
        // look up the element to link to, then create the link
        IntPtr linkedElemPtr = m_api.GetElementById(linkedElement.Id);
        if (linkedElemPtr == IntPtr.Zero)
        {
            return false;
        }
        if (m_api.LinkElements(elemPtr, linkedElemPtr) == false)
        {
            return false;
        }
        return true;
    }
}
Is it worth writing unit tests which mock out the m_api member? It seems that I can test that false is returned if any of the various calls fail, and that true is returned if all of them succeed, and I could set expectations that the various methods are called with the expected parameters, but is this useful? If I were to refactor this code so that it used some slightly different methods of the API but achieved the same result, this would break my tests and I would need to change them. This brittleness doesn't seem very useful.
Should I bother with unit tests for code like this, or should I just stick with the integration tests that I've got?

Look at what the tests are like. If you're only testing that stuff that comes in ends up in the database, etc., you're probably doing the right thing by only writing automated integration tests. If there's logic you want to test, then you might want to look at whether you can factor that logic out into separate classes that you can unit test, with facades around the infrastructure code that contain no logic.

It is a good idea to mock out m_api in your code, for the following reasons (not all of which apply to your pseudocode example):
As you mentioned, you can verify that your class performs error handling properly.
In cases where you have more complex code in your class (e.g., caching), you can use expectations on your mock to ensure that the class is behaving properly. For example, retrieve the same object twice but ensure that m_api is only called once.
Your unit test can test behavior without creating an appropriate data set. This increases maintainability over time as the data model underneath m_api changes.
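As a rough sketch of the first point, here is what such a test might look like with Moq and NUnit. It assumes ITheApi and IElement are mockable interfaces and that ElementSaver can be handed its m_api from outside, say through a constructor; the listing above doesn't show how m_api is wired up:

[Test]
public void SaveElement_ReturnsFalse_WhenElementCannotBeCreated()
{
    // simulate the API failing at the very first step
    var api = new Mock<ITheApi>();
    api.Setup(a => a.CreateNewElement()).Returns(IntPtr.Zero);

    var saver = new ElementSaver(api.Object); // assumes constructor injection

    bool result = saver.SaveElement(new Mock<IElement>().Object, new Mock<IElement>().Object);

    Assert.IsFalse(result);
    // after a failed create, nothing should have been saved or linked
    api.Verify(a => a.SaveElement(It.IsAny<IntPtr>()), Times.Never());
    api.Verify(a => a.LinkElements(It.IsAny<IntPtr>(), It.IsAny<IntPtr>()), Times.Never());
}

Note that this pins down the error-handling contract (false on failure, no partial saves) rather than the exact sequence of API calls, which keeps the brittleness you're worried about to a minimum.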

Related

How to decide what to mock in Java Unit Tests?

I am trying to write unit tests for some legacy code using Mockito, but I am not able to understand how to mock it. Can someone please help?
The real problem I am facing is that I am not able to decide what exactly should be mocked. Below is the code. I have looked at numerous videos on YouTube and read many Mockito tutorials, but all of them seem to focus mostly on how to use the Mockito framework.
The basic idea of what to mock is still unclear to me. Please point me to a better source if you know one. I do understand that the code shown below does not really showcase the best coding practice.
public class DataFacade {
    public boolean checkUserPresent(String userId) {
        return getSomeDao().checkUserPresent(userId);
    }

    private SomeDao getSomeDao() {
        DataSource dataSource = MyDataSourceFactory.getMySQLDataSource();
        return new SomeDao(dataSource);
    }
}
Well, a unit test, as the name implies, tests a unit. You should mock anything that isn't part of that unit, especially external dependencies. A DAO, for example, is normally a good candidate for mocking in tests of a class that uses it, because otherwise your test would perform actual data access, making it slower and more prone to failure for external reasons (for example, if your DAO connects to a DataSource, that DataSource's target, such as the database, may be down, failing your test even if the unit you wanted to test is perfectly fine). Mocking the DAO allows you to test things independently.
Of course, your code is bad. Why? You are creating everything in your method by calling a static factory method. I suggest instead using dependency injection to inject the DAO into your facade, for example...
private final SomeDao someDao;

public DataFacade(SomeDao someDao) {
    this.someDao = someDao;
}
This way, when instantiating your DataFacade, you can give it a DAO, which means that in your test you can give it a mock, for example...
@Test
public void testSomething() {
    SomeDao someDaoMock = Mockito.mock(SomeDao.class);
    DataFacade toTest = new DataFacade(someDaoMock);
    // ...now you can prepare your mock to do something and then call the DataFacade method
}
Dependency injection frameworks like Spring, Google Guice, etc. can make this even easier to manage, but the first step is to stop your classes from creating their own dependencies; instead, let the dependencies be given to them from the outside, which makes the whole thing a lot better.
You should "mock" the inner objects that you use in your methods.
For example if you write unit tests for DataFacade->checkUserPresent, you should mock the getSomeDao field.
You have a lot of ways to do it, but basically you can make getSomeDao to be public field, or get it from the constructor. In your test class, override this field with mocked object.
After you invoke DataFacade->checkUserPresent method, assert that checkUserPresent() is called.
For exmaple if you have this class:
public class StudentsStore
{
    private DbReader _db;

    public StudentsStore(DbReader db)
    {
        _db = db;
    }

    public bool HasStudents()
    {
        var studentsCount = _db.GetStudentsCount();
        if (studentsCount > 0)
            return true;
        else
            return false;
    }
}
And in your test method:
var mockedDb = mock(DbReader.class);
when(mockedDb.GetStudentsCount()).thenReturn(1);
var store = new StudentsStore(mockedDb);
assertEquals(true, store.HasStudents());

Does "unit test only one thing" means one feature or one whole scenario of a unit?

When people say "test only one thing", do they mean test one feature at a time, or one scenario at a time?
method() {
    // setup data
    def data = new Data()

    // send external webservice call
    def success = service.webserviceCall(data)

    // persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to follow the by-feature approach. I got the idea from Roy Osherove's blog, but I'm not sure if I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my wording on the by-feature/by-scenario distinction; I'm not sure how else to word them.)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
You might have heard the adage "one assert per test". It is good advice in general, because a test stops executing as soon as a single assert fails, and all asserts further down are not executed. By splitting the asserts across multiple tests you get more feedback when something goes wrong: when tests go red, you know exactly which asserts fail, and you don't have to run through the fix-assertion-failure, run-tests, fix-next-assertion-failure cycle.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Sidenote: you construct your data object in the method itself and call the save method of that object. How do you sense that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with you where you quoted that this approach makes it "easier to isolate the errors". Roy Osherove really does know what he is talking about, especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (the webservice and the data storage), so I would still class this as a unit test, with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
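In C# with Moq, those three behavior tests might look roughly like the sketch below. The IWebService/IDataStore interfaces and the injectable Processor shape are assumptions on my part, since the original method constructs and saves its data internally:

public interface IWebService { bool WebserviceCall(Data data); }
public interface IDataStore { void Save(Data data); }
public class Data { }

// a hypothetical, testable shape of the method in the question
public class Processor
{
    private readonly IWebService _service;
    private readonly IDataStore _store;

    public Processor(IWebService service, IDataStore store)
    {
        _service = service;
        _store = store;
    }

    public void Process(Data data)
    {
        // send the external webservice call, persist only on success
        if (_service.WebserviceCall(data))
            _store.Save(data);
    }
}

[Test]
public void Process_CallsWebserviceOnceWithTheData()
{
    var service = new Mock<IWebService>();
    var data = new Data();

    new Processor(service.Object, new Mock<IDataStore>().Object).Process(data);

    service.Verify(s => s.WebserviceCall(data), Times.Once());
}

[Test]
public void Process_SavesData_WhenCallSucceeds()
{
    var service = new Mock<IWebService>();
    service.Setup(s => s.WebserviceCall(It.IsAny<Data>())).Returns(true);
    var store = new Mock<IDataStore>();

    new Processor(service.Object, store.Object).Process(new Data());

    store.Verify(s => s.Save(It.IsAny<Data>()), Times.Once());
}

[Test]
public void Process_DoesNotSaveData_WhenCallFails()
{
    var service = new Mock<IWebService>();
    service.Setup(s => s.WebserviceCall(It.IsAny<Data>())).Returns(false);
    var store = new Mock<IDataStore>();

    new Processor(service.Object, store.Object).Process(new Data());

    store.Verify(s => s.Save(It.IsAny<Data>()), Times.Never());
}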
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly then that should be no concern of my test. My test should only be concerned with the external dependencies and public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that runs without calling a database or the file system, and to that effect it does not call a webservice either. The idea is that you should be able to run your unit tests even without an internet connection. Having said that, if a method calls a webservice or a database, then you are expected to mock the responses from the external system and test that unit of work only. As prgmtc mentioned above, one assert per test is the way to go.
Second, if you are calling a real webservice or database, consider calling those tests integrated or integration tests, depending on what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally have only a single assert statement. In reality you will often end up with a number of assert statements that act as a single logical assert, as this often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccountTypeResult = "3";
    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccountTypeResult);
    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------
    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note that I would have unit tested the ValidationResult class itself, but in the test above I feel it gives more clarity to include both asserts.

Application Service Layer: Unit Tests, Integration Tests, or Both?

I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);
    service.Execute(command);
    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}

[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
    /* blah blah */
}

[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
    /* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests into one that checks that the order in the mock repository is on hold.
[Test]
public void PlaceOrderOnHold_PlacesOrderOnHold()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.Save(new MyMockOrder(orderId));
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);
    service.Execute(command);
    Assert.IsTrue(repository.GetOrder(orderId).IsOnHold());
}
There's really no need to check that Load and/or Save is called. Instead I'd just make sure that the only way MyMockRepository will return the updated order is if Load and Save are called.
This kind of simplification is one of the reasons I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks, as sketched below.
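For what it's worth, one possible shape for those hand-rolled fakes (the member names are assumptions, inferred from the service code and the test above):

using System.Collections.Generic;

// IOrder's members are assumed from what the test needs: an id,
// PlaceOnHold(), and a way to observe the hold.
public class MyMockOrder : IOrder
{
    private bool _onHold;
    public MyMockOrder(int id) { Id = id; }
    public int Id { get; private set; }
    public void PlaceOnHold() { _onHold = true; }
    public bool IsOnHold() { return _onHold; }
}

public class MyMockRepository : IOrderRepository
{
    private readonly Dictionary<int, IOrder> _orders = new Dictionary<int, IOrder>();

    // Load only hands back orders that were previously saved, so the test
    // fails fast if the service never goes through the repository.
    public IOrder Load(int orderId) { return _orders[orderId]; }
    public void Save(IOrder order) { _orders[((MyMockOrder)order).Id] = order; }
    public IOrder GetOrder(int orderId) { return _orders[orderId]; }
}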
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline, I usually want a full integration test of every important successful scenario/use case. The other tests I write are for parts of the code that are likely to break with future changes, or that have broken in the past. And that is definitely not all code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure whether it adds value to test that the service loads an order from the repository exactly once. But it could! For instance, if your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (I'm just making that up), then I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your test sequence seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation; the tests can then actually make changes harder. Make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path of all the code being tested. Then I write unit tests for all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test it all with integration tests only, but that makes your tests long-running and harder to debug. On average I have about 10 unit tests per integration test.
I don't bother to test one-liner methods unless something business-logic-like happens in that line.
Update: Just to make it clear, since I'm doing test-driven development I always write the unit tests first and typically write the integration test at the end.

Unit testing Code which use API

I have this simple method which calls the TFS (Team Foundation Server) API to get a WorkItemCollection object. It simply converts the result into an entity class and also adds it to the cache. As you can see, this is very simple.
How should I unit test this method? The only important thing it does is call the TFS API. Is it worth testing such methods? If yes, then how should we test it?
One way I can think of is to mock the call to Query.QueryWorkItemStore(query), return an object of type WorkItemCollection, and then check that this method converts the WorkItemCollection to a List and that the result was added to the cache.
Also, as I am using the dependency injection pattern here, I am injecting dependencies for:
cache
Query
Should I only pass dependencies of mocked type (using Moq), or should I pass the proper class type?
public virtual List<Sprint> Sprint(string query)
{
    List<Sprint> sprints = Cache.Get<List<Sprint>>(query);
    if (sprints == null)
    {
        WorkItemCollection items = Query.QueryWorkItemStore(query);
        sprints = new List<Sprint>();
        foreach (WorkItem i in items)
        {
            Sprint sprint = new Sprint
            {
                ID = i.Id,
                IterationPath = i.IterationPath,
                AreaPath = i.AreaPath,
                Title = i.Title,
                State = i.State,
                Goal = i.Description,
            };
            sprints.Add(sprint);
        }
        Cache.Add(sprints, query, this.CacheExpiryInterval);
    }
    return sprints;
}
Should I only pass dependencies of mocked type (using Moq), or should I pass the proper class type?
In your unit tests, you should pass a mock. There are several reasons:
A mock is transparent: it allows you to check that the code under test did the right thing with the mock.
A mock gives you full control, allowing you to test scenarios that are difficult or impossible to create with the real server (e.g. throw IOException)
A mock is predictable. A real server is not - it may not even be available when you run your tests.
Things you do on a mock don't influence the outside world. You don't want to change data or crash the server by running your tests.
A test with mocks is faster. No connection to the server or real database queries have to be made.
That being said, automated integration tests which include a real server are also very useful. You just have to keep in mind that they will have lower code coverage, will be more fragile, and will be more expensive to create/run/maintain. Keep your unit tests and your integration tests separate.
Edit: some collaborator objects, like your Cache object, may also be very unit-test friendly. If they offer the same advantages of a mock that I list above, then you don't need to create a mock for them. For example, you typically don't need to mock a collection.
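As a concrete illustration of the caching point, a unit test with Moq might look something like this sketch. It assumes Cache and Query are injected as mockable interfaces (ICache and IQuery are made-up names here) into a class I'll call SprintRepository:

[Test]
public void Sprint_ReturnsCachedList_WithoutQueryingTheStore()
{
    var cached = new List<Sprint>();
    var cache = new Mock<ICache>();
    cache.Setup(c => c.Get<List<Sprint>>("my query")).Returns(cached);
    var query = new Mock<IQuery>();

    var repository = new SprintRepository(cache.Object, query.Object);

    var result = repository.Sprint("my query");

    Assert.AreSame(cached, result); // the cache hit is returned as-is
    query.Verify(q => q.QueryWorkItemStore(It.IsAny<string>()), Times.Never());
}

No real server and no TFS connection are needed, and the test documents the caching behavior explicitly.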

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory attribute (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet()
{
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connecting to a real database during the test, your code use some sort of data interface, and that the test use a mock implementation of it. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a unit test.
I don't think that's possible out of the box.
Anyway, your test class design as you describe it will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want: "If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon."
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test(s) into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains, as sketched below.
(I think! I'm an nUnit newbie.)
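A minimal sketch of that setup (Database.CanConnect() is a hypothetical helper; substitute whatever connectivity check fits your project):

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp] // called OneTimeSetUp in newer NUnit versions
    public void CheckPrerequisites()
    {
        // skip the whole fixture if the prerequisite isn't met
        if (!Database.CanConnect())
            Assert.Ignore("Skipping GetTests: database is not reachable.");
    }

    [Test]
    public void GetTest()
    {
        // ...the actual Get tests go here...
    }
}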
Create a flag, set it when the Add test fails (in a catch-all inside testAdd), and return early from the Get test if the flag is set:
public boolean addFailed = false;

public void testAdd() {
    try {
        ... old test code ...
    } catch (Throwable t) { // catch all errors
        addFailed = true;
        throw t; // don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}