DRYing Up EasyMock Tests - unit-testing

It seems like EasyMock tests tend to follow this pattern:
@Test
public void testCreateHamburger()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
        .andReturn(mockHamburger);
    // replay the mock
    EasyMock.replay(mockFoodFactory);
    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));
    // verify the result
    EasyMock.verify(mockFoodFactory);
}
This works fine for one test, but what happens when I want to test the same logic again in a different method? My first thought is to do something like this:
@Before
public void setUp()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
        .andReturn(mockHamburger);
    // replay the mock
    EasyMock.replay(mockFoodFactory);
}

@After
public void tearDown()
{
    // verify the result
    EasyMock.verify(mockFoodFactory);
}

@Test
public void testCreateHamburger()
{
    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));
}

@Test
public void testCreateMeal()
{
    // perform the test
    mockAverager.average(chef.cookMeal("Hamburger"));
}
There are a few fundamental problems with this approach. The first is that I can't have any variation in my method calls. If I want to test chef.cookFood("Turkey Burger"), my setup method won't work. The second problem is that my setup method requires createHamburger to be called. If I call chef.cookFood("Salad"), the expectation might not apply at all. I could use anyTimes() or andStubReturn() with EasyMock to avoid this problem. However, those methods only verify that if the method is called, it's called with certain parameters; they don't verify that the method was actually called.
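For reference, the stubbed variant would look something like this (a rough sketch using the same names as above); the catch is that verify() then passes whether or not the call ever happens:
// A stub expectation relaxes verification: verify() no longer fails
// if createHamburger() is never called, which is exactly the problem.
EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
    .andStubReturn(mockHamburger);
EasyMock.replay(mockFoodFactory);
// ... exercise code that may or may not create a hamburger ...
EasyMock.verify(mockFoodFactory); // passes either way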
The only solution that's worked so far is to copy and paste the expectations for every test and vary the parameters. Does anybody know any better ways to test with EasyMock which maintain the DRY principle?

The problems you are running into are because unit tests should be DAMP, not DRY. Unit tests will tend to repeat themselves. If you can remove the repetition in a safe way (so that it doesn't create unnecessarily coupled tests), then go for it. If not, don't force it. Unit tests should be quick and easy to write; if they aren't, you are spending too much time testing instead of writing business value.
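That said, if some of the repetition really bugs you, one safe way to factor it out (a rough sketch reusing the mockFoodFactory, mockHamburger, chef and mockAverager names from the question) is an ordinary helper method that each test calls explicitly, so every test still states its own parameters, replay and verify:
// Hypothetical helper inside the test class; keeps expectation parameters visible per test.
private void expectHamburger(String meat, String cheese, String veg1, String veg2, String sauce) {
    EasyMock.expect(mockFoodFactory.createHamburger(meat, cheese, veg1, veg2, sauce))
        .andReturn(mockHamburger);
}

@Test
public void testCreateHamburger() {
    expectHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup");
    EasyMock.replay(mockFoodFactory);
    mockAverager.average(chef.cookFood("Hamburger"));
    EasyMock.verify(mockFoodFactory);
}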
Just my two cents. BTW, the Art of Unit Testing by Roy Osherove is a great read on unit testing, and covers this topic.

Related

Mockito Stubbing not working after an exception is stubbed

So, I am trying to unit test a class in various scenarios. We use JUnit 4.
I have a setUp method wherein I re-stub the mock to return an expected mock value.
I have 4 tests: test1 through test4. test1 and test2 work fine with the expected mocked value configured in perTestSetup.
test3 needs MockClass to throw an exception, so I configure that separately inside test3. test3 works fine: the mock throws the exception as expected.
But when perTestSetup tries to reset the mock to return mockResult before running test4, it fails and throws the same runtime exception configured in test3. I also tried reset() before stubbing in perTestSetup(), but that fails the same way.
What am I missing here?
@Before
public void perTestSetup(){
    when(MockClass.functionCall(...)).thenReturn(mockResult);
}

@Test
public void test1(){
}

@Test
public void test2(){
}

@Test
public void test3(){
    when(MockClass.functionCall(...)).thenThrow(new RuntimeException());
    ...
}

@Test
public void test4(){
}
Your perTestSetup() method isn't doing what you think it is doing. The @Before annotation means the test environment will run this method once, before doing any of the tests, rather than once per test. Before I finished reading your question, I was actually itching to advise you to rename this method to simply setup(), as that would be a more accurate description.
Options:
Change the annotation to @BeforeEach, which would then change the behaviour to do what you think it should currently be doing. However, this would be inefficient, as in the later tests you will be defining behaviour and then immediately redefining it.
What do the parameters look like in your functionCall(...)? It may be possible to define two separate behaviours in your single @Before setup() method, i.e.
when(MockClass.functionCall(good values)).thenReturn(mockResult);
when(MockClass.functionCall(bad values)).thenThrow(new RuntimeException());
In each test, call functionCall() with the relevant values for that particular test.
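For concreteness, here is a minimal self-contained sketch of that idea; the Collaborator interface and the "good-input"/"bad-input" values are invented for illustration, and the question's MockClass may differ (especially if functionCall is a static method):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;

public class FunctionCallTest {

    // Hypothetical collaborator standing in for MockClass from the question.
    interface Collaborator {
        String functionCall(String input);
    }

    private Collaborator collaborator;

    @Before
    public void perTestSetup() {
        collaborator = mock(Collaborator.class);
        // Both behaviours defined up front, selected by the argument value.
        when(collaborator.functionCall("good-input")).thenReturn("mockResult");
        when(collaborator.functionCall("bad-input")).thenThrow(new RuntimeException());
    }

    @Test
    public void happyPath() {
        // Tests that need the normal behaviour pass the "good" argument.
        assertEquals("mockResult", collaborator.functionCall("good-input"));
    }

    @Test(expected = RuntimeException.class)
    public void failurePath() {
        // Tests that need the failure behaviour pass the "bad" argument.
        collaborator.functionCall("bad-input");
    }
}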
If the parameters in functionCall() do not readily accommodate the previous approach, consider making two separate instantiations of MockClass, something like
MockClass successfulMockClass = mock(MockClass.class);
when(successfulMockClass.functionCall(...)).thenReturn(mockResult);
MockClass unsuccessfulMockClass = mock(MockClass.class);
when(unsuccessfulMockClass.functionCall(...)).thenThrow(new RuntimeException());
In your tests, call on the relevant mocked object depending on what input you are testing against.
Without being able to see the details of your class, I suspect the second option is what I would go for. It may be worth trying all three to see which feels most intuitive for you, though.

Clarity regarding performing unit test in Spring Boot Application

I started off writing basic unit tests with JUnit for simple methods that add two numbers. I can verify the result by using the assert* family of function calls. Now I want to unit test a Spring Boot controller.
Here is an example of the unit test class:
public class MyJunitTest {

    private MockMvc mockMvc;

    @Mock
    private MyService service;

    @InjectMocks
    private MyController controller;

    @Before
    public void init() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void unitTestGetAssessmentDetails() {
        when(service.getTest(Someobject.class)).thenReturn(customObjectWithValues);
        Results results = controller.getCall(someRequestObject);
        assertEquals(results, someOtherObjectPrefilledWithValues);
    }
}
My question is: since I know the values set in customObjectWithValues, and someOtherObjectPrefilledWithValues is also set by me, assertEquals will always pass, right? It's essentially a 1==1 kind of test. I know that the service object should not connect to the actual DB, and hence we are mocking it, but then what is the point of doing these unit tests? Am I missing the bigger picture here as to how to view unit tests?
P.S. - Please feel free to remove this question if it violates the rules of this forum.
Your test verifies that getCall returns the expected results; this is a black-box test. Since you are writing a unit test, this is only sufficient to "pretend" to perform a unit test. This technique is common in firms that conflate code quality with unit test code coverage.
A better technique is to identify the steps that will be taken by the controller class and to verify that each was executed (perhaps in the correct order, as well).
I will assume that MyController.getCall looks something like this:
@Autowired
private MyService myService;

public BlammyReturnValueType getCall(final BlammyRequestType request)
{
    final BlammyReturnValueType returnValue;
    returnValue = myService.someServiceMethod(request);
    return returnValue;
}
In this case, I would add the following to the unitTestGetAssessmentDetails test method, after the current stuff (including the assert):
Mockito.verify(service, times(1)).someServiceMethod(someRequestObject);
This will confirm that the service method was called exactly one time, with the request the controller received, which is correct in this example.
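Putting it together, the whole test might look roughly like this (still a sketch built on the guessed getCall shape above, with Mockito and JUnit static imports assumed):
@Test
public void unitTestGetAssessmentDetails() {
    // Stub the collaborator (white-box knowledge of what getCall delegates to).
    when(service.someServiceMethod(someRequestObject)).thenReturn(someOtherObjectPrefilledWithValues);

    Results results = controller.getCall(someRequestObject);

    // Black-box check on the returned value...
    assertEquals(someOtherObjectPrefilledWithValues, results);
    // ...plus a check that the controller actually delegated, exactly once.
    Mockito.verify(service, times(1)).someServiceMethod(someRequestObject);
}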
OK, so unit testing is more about testing the logic written in your function/controller/service. Your function might be very simple, or it might be very complex. For example, it might take a user ID in the request, connect to the database, get the data and return it; since you are mocking the database connection, you might feel that if you pass the mocked object in as the database response you will obviously get the same response back, so what's the point of testing? It might seem correct not to test at all in this case.
But consider another example: say you have a very complex function that takes a user ID, fetches the whole year of the user's banking history, aggregates it and calculates the amount the user earned this year. Since you have mocked the DB connection, you pass in some data, a lot of computation happens inside, and the function returns the amount earned as interest on savings. For a given set of mocked data, you know the answer should come out as some amount X. Now suppose that over time someone makes a mistake (maybe subtracts something that should have been added). When you run the test, it will fail, and you know something is wrong with the logic. Here you are not directly expecting the output to equal your mocked data; some computation has been done over that data, so to verify after each change that the function's logic is still correct, you need a unit test for it. Looked at this way, you are not testing 1==1 but something quite different. This is why people write unit tests: to check the logic inside a unit of code.
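A tiny made-up sketch of that idea (the InterestCalculator and BankingHistoryDao names are invented for illustration): the DAO is mocked, but the assertion checks the computation the class performs on top of that data, so it is not a 1==1 test.
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class InterestCalculatorTest {

    // Hypothetical collaborator that would normally hit the database.
    interface BankingHistoryDao {
        List<Double> monthlyBalances(long userId, int year);
    }

    // Hypothetical unit under test: interest earned = average monthly balance * annual rate.
    static class InterestCalculator {
        private final BankingHistoryDao dao;
        private final double annualRate;

        InterestCalculator(BankingHistoryDao dao, double annualRate) {
            this.dao = dao;
            this.annualRate = annualRate;
        }

        double interestEarned(long userId, int year) {
            List<Double> balances = dao.monthlyBalances(userId, year);
            double sum = 0;
            for (double balance : balances) {
                sum += balance;
            }
            return (sum / balances.size()) * annualRate;
        }
    }

    @Test
    public void interestIsComputedFromMockedHistory() {
        BankingHistoryDao dao = mock(BankingHistoryDao.class);
        when(dao.monthlyBalances(42L, 2023)).thenReturn(Arrays.asList(1000.0, 2000.0, 3000.0));

        InterestCalculator calculator = new InterestCalculator(dao, 0.05);

        // Average balance is 2000.0, so at 5% the expected interest is 100.0.
        // If someone later breaks the calculation, this test goes red.
        assertEquals(100.0, calculator.interestEarned(42L, 2023), 0.0001);
    }
}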
Hope this helps.
Usually we have 3 layers: Controller, Service, Repository (DAO if you prefer). I usually do not test controllers because I do not put any logic in them; I just define the endpoint and call the service. The service is what I heavily unit test, so you would inject the mocks into your service. Then you would mock your repository so it doesn't try to connect to a database.
@InjectMocks
private MyService service;

@Mock
private MyRepository myRepo;

@Test
public void unitTestGetAssessmentDetails() {
    when(myRepo.find(someInt)).thenReturn(customObjectWithValues);
    Results results = service.serviceMethod(someRequestObject);
    assertEquals(results, someOtherObjectPrefilledWithValues);
}
Since controllers are supposed to have no logic, they shouldn't need unit tests; as you correctly say, that would be a 1==1 test. They would be covered by integration tests instead.
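For what it's worth, the MockMvc field the question declares (but never uses) is one way to exercise the endpoint itself without a full integration test. A rough sketch, with the /assessments path and the stubbing left schematic (all names besides the Spring and Mockito APIs are assumptions):
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Before;
import org.junit.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;

public class MyControllerMvcTest {

    @Mock
    private MyService service;

    @InjectMocks
    private MyController controller;

    private MockMvc mockMvc;

    @Before
    public void init() {
        MockitoAnnotations.initMocks(this);
        // Standalone setup wires only this controller, no full Spring context.
        mockMvc = MockMvcBuilders.standaloneSetup(controller).build();
    }

    @Test
    public void endpointRespondsOk() throws Exception {
        // Stub service calls here as needed, e.g. when(service.getTest(...)).thenReturn(...);
        mockMvc.perform(get("/assessments"))
               .andExpect(status().isOk());
    }
}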

What is exactly mocking in unit testing?

I am trying to test an HTTP server's functionality, and this is the first time I have been exposed to the notions of mocks, stubs and fakes. I'm reading this book, which says mocking is describing the state of a complex object at a certain moment, like a snapshot, and then testing against that object as if it were the real one. Right? I understand this. What I don't understand is the point of writing unit tests and assertions for cases that we fully describe in the mock object. If we set the expectations for arguments and return values on the mock object, and then test for those exact values, the test will always pass. Please correct me; I know I am missing something here.
Mocks, stubs, or anything similar serve as a substitute for the actual implementation.
The idea of mocks is basically to test a piece of code in an isolated environment, substituting its dependencies with fake implementations. For example, in Java, suppose we want to test the PersonService#save method:
class PersonService {

    PersonDAO personDAO;

    // Set when the person was created and then save to DB
    public void save(Person person) {
        person.setCreationDate(new Date());
        personDAO.save(person);
    }
}
A good approach is creating a unit test like this:
class PersonServiceTest {

    // Class under test
    PersonService personService;

    // Mock
    PersonDAO personDAOMock;

    // Mocking the dependencies of personService.
    // In this case, we mock the PersonDAO. Doing this
    // we don't need the DB to test the code.
    // We assume that personDAO will give us an expected result
    // for each test.
    public void setup() {
        personService.setPersonDao(personDAOMock);
    }

    // This test will guarantee that the creationDate is defined
    public void saveShouldDefineTheCreationDate() {
        // Given a person
        Person person = new Person();
        person.setCreationDate(null);

        // When the save method is called
        personService.save(person);

        // Then the creation date should be present
        assert person.getCreationDate() != null;
    }
}
In other words, you should use mocks to substitute the dependencies of your code with an "actor" that can be configured to return an expected result or to assert some behaviours.
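For comparison, here is roughly what the same test looks like with Mockito doing the substitution (a sketch assuming the PersonService, PersonDAO and Person classes above, with a setter-injected DAO as in the setup method):
import static org.junit.Assert.assertNotNull;
import static org.mockito.Mockito.*;

import org.junit.Before;
import org.junit.Test;

public class PersonServiceMockitoTest {

    private PersonDAO personDAOMock;
    private PersonService personService;

    @Before
    public void setup() {
        personDAOMock = mock(PersonDAO.class);   // fake implementation, no DB needed
        personService = new PersonService();
        personService.setPersonDao(personDAOMock);
    }

    @Test
    public void saveShouldDefineTheCreationDateAndDelegateToTheDao() {
        Person person = new Person();
        person.setCreationDate(null);

        personService.save(person);

        // Behaviour of the unit itself: it filled in the creation date...
        assertNotNull(person.getCreationDate());
        // ...and it handed the person to its collaborator exactly once.
        verify(personDAOMock, times(1)).save(person);
    }
}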

Does "unit test only one thing" means one feature or one whole scenario of a unit?

When people say "test only one thing", do they mean test one feature at a time, or one scenario at a time?
method() {
    //setup data
    def data = new Data()

    //send external webservice call
    def success = service.webserviceCall(data)

    //persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to take the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure if I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my wording on the by feature/scenario, I'm not sure how to word them)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests; a rough code sketch follows the list:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
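A rough Java/Mockito sketch of those tests, including the optional third one (the CutUnderTest, WebService and DataStore names are invented; the question's Groovy-ish method saves through data.save(), which is harder to sense, as the sidenote further down points out):
import static org.mockito.Mockito.*;

import org.junit.Test;

public class CutUnderTestTest {

    // Hypothetical collaborators standing in for the web service and persistence.
    interface WebService { boolean call(String data); }
    interface DataStore { void save(String data); }

    // Hypothetical class under test, mirroring the question's method.
    static class CutUnderTest {
        private final WebService service;
        private final DataStore store;

        CutUnderTest(WebService service, DataStore store) {
            this.service = service;
            this.store = store;
        }

        void method(String data) {
            if (service.call(data)) {
                store.save(data);
            }
        }
    }

    @Test
    public void callsTheWebServiceWithTheExpectedData() {
        WebService service = mock(WebService.class);
        DataStore store = mock(DataStore.class);

        new CutUnderTest(service, store).method("payload");

        verify(service).call("payload");
    }

    @Test
    public void savesTheDataWhenTheCallSucceeds() {
        WebService service = mock(WebService.class);
        DataStore store = mock(DataStore.class);
        when(service.call(anyString())).thenReturn(true); // don't overspecify the argument here

        new CutUnderTest(service, store).method("payload");

        verify(store).save("payload");
    }

    @Test
    public void doesNotSaveWhenTheCallFails() {
        WebService service = mock(WebService.class);
        DataStore store = mock(DataStore.class);
        when(service.call(anyString())).thenReturn(false);

        new CutUnderTest(service, store).method("payload");

        verify(store, never()).save(anyString());
    }
}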
You might have heard the adage "one assert per test". This is good advice in general, because a test stops executing as soon as a single assert fails, and all asserts further down are not executed. By splitting the asserts across multiple tests you get more feedback when something goes wrong: when tests go red, you know exactly which asserts failed, and you don't have to cycle through fixing one assertion failure, rerunning the tests, fixing the next, and so on.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Sidenote: you construct your data object in the method itself and call the save method of that object. How do you sense that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with you where you quoted that with this approach it is "easier to isolate the errors". Roy Osherove really does know what he is talking about especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (webservice, and data storage) so I would still class this as a unit test with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly then that should be no concern of my test. My test should only be concerned with the external dependencies and public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that runs without a call to a database or the file system, and for that matter does not call a web service either. The idea of a unit test is that you should be able to run it even without an internet connection. Having said that, if a method calls a web service or a database, you are expected to mock the responses from the external system; you should be testing that unit of work only. As mentioned above by prgmtc, asserting one thing per test is the way to go.
Second, if you are calling a real web service or database, consider calling those tests integrated or integration tests, depending on what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally only have a single assert statement. In reality you will often find you end up with a number of assert statements that act as a single logical assert, as that often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccountTypeResult = "3";
    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccountTypeResult);
    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------
    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note that I would have unit tested the ValidationResult class itself, but in the test above I feel it gives more clarity to include both asserts.

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first, before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(somedata);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, your test class design as you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, Assert.Ignore or Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
Create a shared flag and return early from the Get test if the Add test failed; set the flag in a catch-all block around the Add test's body:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}