About unit test practice? - unit-testing

I have some questions about unit testing.
Business
When an order is placed, the system should display a list of the available coupons.
Code
public List<Coupon> getAvailableCouponsWhenOrder(String productCategory, int productPrice, String shopId, String userId) {
    // get all of the user's coupons that have not been used yet
    List<Coupon> couponList = couponMapper.queryCoupons(userId);
    // filter coupons
    List<Coupon> availableList = new ArrayList<>();
    for (Coupon coupon : couponList) {
        // some coupons only apply to a specific category, e.g. book, phone
        if (!checkCategory(coupon, productCategory)) {
            continue;
        }
        // some coupons have a usage condition, e.g. spend 100 to get 50 off
        if (!checkPrice(coupon, productPrice)) {
            continue;
        }
        // you cannot use another shop's coupon
        if (!checkShopCoupon(coupon, shopId)) {
            continue;
        }
        availableList.add(coupon);
    }
    return availableList;
}
And I have the unit test below:
@Test
public void test_exclude_coupon_which_belong_to_other_shop() {
    String productCategory = "book";
    int productPrice = 200;
    String shopId = RandomStringUtils.randomAlphanumeric(10);
    String userId = RandomStringUtils.randomAlphanumeric(10);
    Coupon coupon = new Coupon();
    String anotherShopId = RandomStringUtils.randomAlphanumeric(10);
    coupon.setShopId(anotherShopId); // another shop's coupon
    coupon.setCategory("book");      // only usable when buying books
    coupon.setFullPrice(200);        // spend 200 to get 50 off
    coupon.setPrice(50);
    when(couponMapper.queryCoupons(userId)).thenReturn(newArrayList(coupon));

    List<Coupon> availableCoupons = service.getAvailableCouponsWhenOrder(productCategory, productPrice, shopId, userId);

    // check that another shop's coupon is excluded
    assertEquals(0, availableCoupons.size());
}
But even if it passes, I still have no confidence that it is right, because maybe an earlier check failed instead - e.g. checkCategory always returns false.
So how do I know it really executed checkShopCoupon?

Please stay away from using a mocking framework to test your private methods. The only thing that (probably!) requires mocking is this line:
List<Coupon> couponList = couponMapper.queryCoupons(userId);
It is really simple: you have to make sure that your list couponList has a specific content. In other words: you create a whole set of different test methods that each pre-set couponList with different values. And then you check that the returned list matches your expectations.
And for the record: better use the one and only assert that one really needs:
assertThat(service.getAvailableCouponsWhenOrder(productCategory, productPrice, shopId, userId), is(somePredefinedList));
(where is() is a Hamcrest matcher method)
Long story short: you probably need to use mocking to gain control over the return value of that call to queryCoupons() - but that is the only part that requires mocking here!
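For instance, a pair of tests along these lines would pin the exclusion on the shop check specifically. This is a minimal sketch assuming JUnit 4, Mockito, Hamcrest, a Coupon with a sensible equals(), and a hypothetical couponFor(category, fullPrice, shopId) helper that builds a coupon passing every other check:
@Test
public void includes_coupon_that_passes_every_check() {
    Coupon matching = couponFor("book", 200, shopId); // same shop; category and price match
    when(couponMapper.queryCoupons(userId)).thenReturn(newArrayList(matching));

    assertThat(service.getAvailableCouponsWhenOrder("book", 200, shopId, userId),
            is(newArrayList(matching)));
}

@Test
public void excludes_only_the_coupon_from_another_shop() {
    Coupon matching = couponFor("book", 200, shopId);
    Coupon foreign = couponFor("book", 200, anotherShopId); // differs from 'matching' ONLY in shop
    when(couponMapper.queryCoupons(userId)).thenReturn(newArrayList(matching, foreign));

    // if checkCategory or checkPrice were the checks that failed,
    // 'matching' would disappear too - so this pins the blame on the shop check
    assertThat(service.getAvailableCouponsWhenOrder("book", 200, shopId, userId),
            is(newArrayList(matching)));
}
Because the two coupons differ in a single attribute, a pass here can only mean that checkShopCoupon did its job.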

There are two things you need to check in a test: that the logic is correct, and that the logic is invoked correctly. In your case that can be:
A test to check the logic: CouponMapperTest#queryingCouponReturnsCouponByUserId()
To check whether it's actually invoked you create a component test (without a mock) that invokes getAvailableCouponsWhenOrder() which in turn invokes couponMapper.queryCoupons(userId)
But with the suggested approach there is a risk that you end up with a lot of component tests, which could take time to run. To counter that you need to build a balanced Test Pyramid (here is a thorough example) to ensure that you have a lot of unit tests, fewer component tests, and even fewer system tests. Some advice on that:
Move towards OOP if possible. That means ObjectA contains ObjectB, as opposed to containing IdOfObjectB. That allows you to move logic lower, into the domain objects. Which leads to the next point:
Move domain logic as low as possible, preferably into the domain model where it actually belongs - a.k.a. a Rich Model (see the sketch after this list). Then you'll be able to test things without initializing half of the app. And that's faster.
Mocking frameworks add a lot of maintenance burden to your tests. If something is refactored you often have to change a whole bunch of tests. Another problem with mocking frameworks: you test things in isolation, while you also have to ensure that the project works as a whole.
So I'd discourage the usage of mocking unless it's a boundary of your app or it's a very heavy dependency.
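To illustrate the Rich Model point with the coupon example from the question - a hedged sketch, with the applicability rules inferred from the posted checks - if Coupon owns its own rules, they can be unit tested with plain objects and no mocks at all:
public class Coupon {
    private String shopId;
    private String category;
    private int fullPrice; // minimum order price for the coupon to apply

    // constructor and getters omitted

    public boolean isApplicable(String productCategory, int productPrice, String orderShopId) {
        return category.equals(productCategory)
                && productPrice >= fullPrice
                && shopId.equals(orderShopId);
    }
}
The service method then shrinks to a simple filter over couponList, and only the couponMapper boundary still needs a mock.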

Related

How to choose TDD starting point in a real world project?

I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start, or what the first test should be.
Suppose I have to write a client library calling an external system's methods (e.g. notification).
I want this client to work as follows:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message-format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up a rough set of classes for this library?
Should I start by testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with none of the baby steps that TDD prescribes. I can mock out something in the client, but then I have to know upfront what to mock, so some upfront design has to be made.
Maybe I should start from the bottom: test the message-formatting component first, and then use it in the right client test?
Which way is the right one to go?
Should we always start from the top (and how do we deal with the huge step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I've written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(String clientId) {
    }

    public String clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public String clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private String _clientId;

    public NotificationClient(String clientId) {
        _clientId = clientId;
    }

    public String clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
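Sketched out, that next test only reuses what's already in the question (the Response type, Event.WRONG_EVENT, and the 1223 code; nothing else is assumed):
public void testNotifyOnWrongEventReturnsCode1223() {
    NotificationClient client = new NotificationClient("abcd1234");
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
As before, the first implementation of notifyOnEvent() can hard-code that Response, and later tests force it to generalize.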
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes that you can unit test separately.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test-Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps that need to occur to provide that high-level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification client interface you have started to define? Keep drilling down depth-first, defining behavior with failing tests, until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test, and so on. You'll eventually have completed a depth-first construction of a tree of tests, and should have a well-tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find places that make sense to refactor, so you will need to modify your tests as you go. That's the way it's supposed to work...

Unit test 'structure' of method?

Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts about certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and a certain set of guidelines (rules) in the developer guide book describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();
            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);
            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked, and that the behavior is as expected. However, would I write unit tests that assert that:
Constructor for SomeProfiler with the input parameter "someKey" is called
The Constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would effectively be testing the whole structure of the method, not just its behavior. My initial thought is that this is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure, and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests by using static code analysis or unit tests plus reflection?
Thanks in advance.
I think that your initial thought was right - this is overkill. Although you could use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/dll have these properties.
Having said that, I'm a believer in "only code what you need": why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't, and then it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to communicate with your database, it can't truly be considered a "unit test", as it must talk to physical dependencies that have additional setup and preconditions: availability of the environment, database schema, existing data, stored procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class - invoke a method on your repository and then validate that the results are what you expected. However, you'll quickly realize that there's a hidden cost here: the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic in your API. My preference for solving the well-known-state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);
            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);
            return sp.invoke();
        }
    }
}

Is it correct to test all of the possible conditions or states of an object returned from a unit?

Imagine unit testing the following sample scenario. Let's say I've mocked out the CustomerDAO in order to return a valid customer and customer orders. Testing this scenario is fairly easy. Except when I get to testing the boolean for whether or not a customer has orders. In some real world scenarios they will not have orders. So do I need to add some condition in my mock DAO that will return a customer with no orders, and then test that as well? Now imagine it is much more complicated and there are several pieces of the DTO that could contain various bits of information depending on the real results coming back from the database. Do I need to test all of those various conditions?
public class Manager {
    public CustomerDTO getCustomerInformation() {
        CustomerDAO customerDAO = new CustomerDAO();
        CustomerDTO customerDTO = new CustomerDTO();
        customerDTO.setCustomer(customerDAO.getCustomer(1));
        customerDTO.setCustomerOrders(customerDAO.getCustomerOrders(1));
        if (!customerDTO.getCustomerOrders().isEmpty()) {
            customerDTO.setHasCustomerOrders(true);
        }
        return customerDTO;
    }
}
In short: yes, I think you should test that when the DAO returns various things, the expected state exists on the DTO. Otherwise how can you have any confidence that the DTO is an accurate representation of the data that was in the datastore?
You should create either a mock DAO per test with the required data for the test, or a mock DAO for each state that needs to be tested.
I'd probably have tests like:
CustomerHasOrders_WhenDaoReturnsNoOrders_ReturnsFalse
CustomerHasOrders_WhenDaoReturnsOrders_ReturnsTrue
GetCustomer_WhenDaoReturnsCustomer_CustomerIsSame
GetCustomerOrders_WhenDaoReturnsOrders_OrdersAreTheSame
GetCustomerOrders_WhenDaoReturnsNoOrders_OrdersAreEmpty
and then tests for what happens if any of the calls to the Dao fail... (the first of these is sketched below)
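A hedged sketch of that first test in JUnit + Mockito. It assumes the DAO can be injected - the posted Manager news up its CustomerDAO inline, so a constructor seam would be needed first - and that the DTO exposes the flag via a hasCustomerOrders() getter:
@Test
public void customerHasOrders_whenDaoReturnsNoOrders_returnsFalse() {
    CustomerDAO customerDAO = mock(CustomerDAO.class);
    when(customerDAO.getCustomer(1)).thenReturn(new Customer());
    when(customerDAO.getCustomerOrders(1)).thenReturn(new ArrayList<>());

    Manager manager = new Manager(customerDAO); // hypothetical injected constructor

    CustomerDTO dto = manager.getCustomerInformation();

    assertFalse(dto.hasCustomerOrders()); // the flag stays false for an empty order list
}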
In this example the flag seems redundant in the DTO, as it is just a different representation of other data. You could implement it as an extension method on CustomerDTO, or as logic in setCustomerOrders. If the DTO is to be sent over the wire and that functionality won't necessarily be there, you could exclude the property and let the client do the check itself, the same way you do customerDTO.getCustomerOrders().isEmpty().
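For instance, the flag could be derived instead of stored - a one-method sketch, with the getter name assumed:
public boolean hasCustomerOrders() {
    return customerOrders != null && !customerOrders.isEmpty();
}
That removes a whole axis of states to test, since the flag can no longer disagree with the list.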
You want to test whether Manager converts Customer and its related data to CustomerDTO. So yes, you have to test all those different scenarios. But don't do it all in one test. Each test should run to its end, finishing with either success or failure. What I mean:
// Don't do this
[Test]
public void Manager_Converts_Customer_to_CustomerDTO()
{
    // mock setup
    var dto = Manager.GetCustomer();
    Assert.That(dto, Is.Not.Null);
    Assert.That(dto.Firstname, Is.EqualTo("what has been setup in mock"));
    Assert.That(dto.Orders.Count, Is.EqualTo(expected_set_in_mock));
    Assert.That(dto.Orders[0].Product, Is.EqualTo(expected_set_in_mock_product));
}
Because then you have the same single test failing whether there is one error or there are four. You fix one bug and expect the test to pass, but then it fails again on the next line. Put all those asserts into different tests with descriptive names.
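Split out, it could look like this - sketched in JUnit to match the Java code in the question, with the mock setup and expected values assumed:
@Test
public void manager_maps_firstname_into_dto() {
    CustomerDTO dto = manager.getCustomerInformation();
    assertEquals("first name set up in the mock", dto.getCustomer().getFirstname());
}

@Test
public void manager_maps_all_orders_into_dto() {
    CustomerDTO dto = manager.getCustomerInformation();
    assertEquals(expectedOrderCount, dto.getCustomerOrders().size());
}
Now a failure report names exactly which mapping broke.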

Unit testing code which uses an API

I have this simple method which calls the TFS (Team Foundation Server) API to get a WorkItemCollection object. I just convert it into an entity class and also add it to a cache. As you can see, this is very simple.
How should I unit test this method? The only important thing it does is call the TFS API. Is it worth testing such methods? If yes, how should we test them?
One way I can think of is to mock the call to Query.QueryWorkItemStore(query), return an object of type WorkItemCollection, and finally check that this method converts the WorkItemCollection to a List - and check whether the result was added to the cache or not.
Also, as I am using the dependency injection pattern here, I am injecting dependencies for:
cache
Query
Should I only pass dependencies of mocked types (using Moq), or should I pass the proper class types?
public virtual List<Sprint> Sprint(string query)
{
    List<Sprint> sprints = Cache.Get<List<Sprint>>(query);
    if (sprints == null)
    {
        WorkItemCollection items = Query.QueryWorkItemStore(query);
        sprints = new List<Sprint>();
        foreach (WorkItem i in items)
        {
            Sprint sprint = new Sprint
            {
                ID = i.Id,
                IterationPath = i.IterationPath,
                AreaPath = i.AreaPath,
                Title = i.Title,
                State = i.State,
                Goal = i.Description,
            };
            sprints.Add(sprint);
        }
        Cache.Add(sprints, query, this.CacheExpiryInterval);
    }
    return sprints;
}
Should I only pass dependencies of mocked types (using Moq), or should I pass the proper class types?
In your unit tests, you should pass a mock. There are several reasons:
A mock is transparent: it allows you to check that the code under test did the right thing with the mock.
A mock gives you full control, allowing you to test scenarios that are difficult or impossible to create with the real server (e.g. throw IOException)
A mock is predictable. A real server is not - it may not even be available when you run your tests.
Things you do on a mock don't influence the outside world. You don't want to change data or crash the server by running your tests.
A test with mocks is faster. No connection to the server or real database queries have to be made.
That being said, automated integration tests which include a real server are also very useful. You just have to keep in mind that they will have lower code coverage, will be more fragile, and will be more expensive to create/run/maintain. Keep your unit tests and your integration tests separate.
Edit: some collaborator objects, such as your Cache object, may also be very unit-test friendly. If they have the same advantages as a mock (as listed above), then you don't need to create a mock. For example, you typically don't need to mock a collection.
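To make the approach from the question concrete, here is a hedged sketch in Java/Mockito (the C# original would use Moq analogously). Every name in it - WorkItemQuery, SprintCache, SprintService, aWorkItemTitled() - is a hypothetical stand-in for the injected dependencies described above:
@Test
public void sprint_convertsItemsAndAddsResultToCache() {
    WorkItemQuery query = mock(WorkItemQuery.class);
    SprintCache cache = mock(SprintCache.class);
    when(cache.get("my query")).thenReturn(null); // force a cache miss
    when(query.queryWorkItemStore("my query"))
            .thenReturn(singletonList(aWorkItemTitled("Sprint 1")));

    SprintService service = new SprintService(query, cache);
    List<Sprint> sprints = service.sprint("my query");

    assertEquals("Sprint 1", sprints.get(0).getTitle()); // conversion happened
    verify(cache).add(sprints, "my query", service.getCacheExpiryInterval()); // result was cached
}
The transparency point above is exactly the verify(cache) line: the mock lets you assert that the method stored what it returned.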

Initialize object to test in SetUp or during the test method?

I was wondering whether the object under test should be a field and thus set up during a SetUp method (i.e. in JUnit, NUnit, MSTest, ...).
Consider the following examples (this is C# with MSTest, but the idea should be similar for any other language and testing framework):
public class SomeStuff
{
    public string Value { get; private set; }

    public SomeStuff(string value)
    {
        this.Value = value;
    }
}
[TestClass]
public class SomeStuffTestWithSetUp
{
    private string value;
    private SomeStuff someStuff;

    [TestInitialize]
    public void MyTestInitialize()
    {
        this.value = Guid.NewGuid().ToString();
        this.someStuff = new SomeStuff(this.value);
    }

    [TestCleanup]
    public void MyTestCleanup()
    {
        this.someStuff = null;
        this.value = string.Empty;
    }

    [TestMethod]
    public void TestGetValue()
    {
        Assert.AreEqual(this.value, this.someStuff.Value);
    }
}
[TestClass]
public class SomeStuffTestWithoutSetup
{
    [TestMethod]
    public void TestGetValue()
    {
        string value = Guid.NewGuid().ToString();
        SomeStuff someStuff = new SomeStuff(value);
        Assert.AreEqual(value, someStuff.Value);
    }
}
Of course, with just one test method the first example is much too long, but with more test methods this could save quite some redundant code.
What are the pros and cons of each approach? Are there any “Best Practices”?
It's a slippery slope once you start initializing fields and generally setting up the context of your test within the test method itself. This leads to large test methods and really unmanageable fixtures that don't explain themselves very well.
Instead, you should look at BDD-style naming and test organization. Make one fixture per context, rather than one fixture per system under test. Then your [SetUp] truly does set up the context, and your tests can be simple one-liner asserts.
It's much easier to read when you see a test output that does this:
OrderFulfillmentServiceTests.cs
with_an_order_from_a_new_customer
it should check their credit from the credit service
it should give no discount
with valid credit check
it should decrement inventory
it should ship the goods
with a customer in texas or california
it should add appropriate sales tax
with an order from a gold customer
it should NOT check credit
it should get expedited shipping added for free
Our tests are now really good documentation for our system. Each "with_an..." is a test fixture, and the items below it are tests. Within those, you setup the context (the state of the world as the class name describes) and then the test does the simple assert that verifies what the method name says it does.
The second approach is much more readable, and much easier to visually trace.
However, the first approach means less repetition.
What I've found is that I tend to use the SetUp to create objects (especially for things with a number of dependencies), and then set the values used in the test itself. From experience, this provides about the right amount of code-reuse versus readability/traceability.
From talking with Kent Beck about the design of JUnit, I know that test classes were a way to share setup between tests, so using common initialization was the intent. However, along with that, it means splitting tests that require different setup into separate test classes that have revealing names.
Personally, I use Setup and Teardown methods for two distinct reasons, although I assume that others will have different reasons.
Use Setup and Teardown methods when there is common initialization logic that is used by all tests and a single instance of the object(s) created in the Setup is designed to be reused.
Use Setup and Teardown methods when the time it takes to create and destroy the object(s) is long enough to slow down the unit testing process when repeated in each TestMethod.
To give you an idea of how often I run across these scenarios: in a project that I am working on now, only two of my test classes (out of about eighty) have an explicit need for Setup and Teardown methods, and both times it was to satisfy my second reason, due to the 10-second max I have enabled for each test execution.
I also prefer the readability of having the object(s) created and destroyed within the TestMethod, although it is not a breaking or selling point for me.
The approach I take is somewhere in the middle: I use TearDown and SetUp to create a test "sandbox" directory (and delete it when done), as well as to initialize some test member variables with default values that will be used in the tests. I then set up some helper methods - one, generally called InstantiateClass(), I call with the default parameters (if any), which I can override as necessary in each explicit test.
[Test]
public void TestSomething()
{
    _myVar = "value";
    InstantiateClass();
    RunTheClass();
    Assert.IsTrue(this, that);
}
In practice, I find setup methods make it hard to reason about a failing test: you have to scroll to somewhere near the top of the file (which can be very large) to figure out which collaborator has broken (not easy with mocking), and there is no clickable reference to navigate in your IDE. In short, you lose spatial locality.
Static helper methods reveal the collaborators more explicitly, and you avoid fields which unnecessarily widen the scope of variables.
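A minimal sketch of the static-helper idea, transliterated to Java/JUnit from the SomeStuff example above (the helper name is made up):
public class SomeStuffTest {

    @Test
    public void getValue_returnsTheConstructorArgument() {
        // every collaborator is visible right here at the call site
        SomeStuff someStuff = newSomeStuff("some value");
        assertEquals("some value", someStuff.getValue());
    }

    // static helper: no fields, no hidden setup, and variable scope stays narrow
    private static SomeStuff newSomeStuff(String value) {
        return new SomeStuff(value);
    }
}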