Are unit tests supposed to check if a class implements an interface using reflection (same question with inheritance)? If no, why?
If the implementation is removed, the code may still compile, and the tests might still be successful (it depends on what the code does).
Unit tests should test anything that may not work. If the programming language doesn't ensure a class implements all methods in a contract, then you'd probably want to check this in tests.
You should test what is important to your code. Things like inheritance and interfaces are at best implementation details, which should be invisible in the results your code produces.
That is to say, if your code passes without the inheritance, it probably didn't need the inheritance and it should be cleaned up as cruft.
A few times in my career I had a situation where I wrote this kind of unit test, mostly for testing conventions.
E.g. there was an implicit assumption that all unit tests should inherit from BaseTest (although technically everything worked fine without this, we wanted that for the sake of coherence) and we had a unit test that enforced exactly that :).
So yes, that makes perfect sense, if necessary.
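For what it's worth, such a convention test can be a few lines of reflection. A minimal sketch with NUnit, where BaseTest stands in for whatever base class your team wants enforced:

using System.Linq;
using System.Reflection;
using NUnit.Framework;

[TestFixture]
public class ConventionTests : BaseTest
{
    [Test]
    public void EveryTestFixtureInheritsFromBaseTest()
    {
        // Collect every [TestFixture] class in this assembly that does
        // not derive from the team's BaseTest.
        var offenders = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => t.IsDefined(typeof(TestFixtureAttribute), inherit: true))
            .Where(t => !typeof(BaseTest).IsAssignableFrom(t))
            .Select(t => t.FullName)
            .ToList();

        Assert.IsEmpty(offenders,
            "Test fixtures not inheriting from BaseTest: " + string.Join(", ", offenders));
    }
}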
Related
When unit testing, is it better practice to test a class or individual methods?
Most of the examples I've seen test the class apart from other classes, mocking dependencies between classes. Another method I've played around with is mocking the methods you're not testing (by overriding them) so that you're only testing the code in one method. Thus one bug breaks one test, since the methods are isolated from each other.
I was wondering if there is a standard method and if there are any big disadvantages to isolating each method for testing as opposed to isolating classes.
The phrase unit testing comes from hardware systems testing, and is more or less semantics-free when applied to software. It can get used for anything from isolation testing of a single routine to testing a complete system in headless mode with an in-memory database.
So don't trust anyone who argues that the definition implies there is only one way to do things independently of context; there are a variety of ways, some of which are sometimes more useful than others. And presumably every approach a smart person would argue for has at least some value somewhere.
The smallest unit of hardware is the atom, or perhaps some subatomic particle. Some people test software as if they were scanning each atom to see whether the laws of quantum mechanics still held. Others take a battleship and see if it floats.
Something in between is very likely better. Once you know something about the kind of thing you are producing beyond 'it is software', you can start to come up with a plan that is appropriate to what you are supposed to be doing.
The point of unit testing is to test a unit of code, i.e. a class.
This gives you confidence that that part of the code, on its own, is doing what is expected.
This is also the first part of the testing process. It helps catch those pesky bugs as early as possible, and having a unit test that demonstrates a bug makes it much easier to fix further down the line.
Unit testing by definition is testing the smallest piece of written code you can. "Units" are not classes; they are methods.
Every public method should have at least 1 unit test, that tests that method specifically.
If you follow the rule above, you will eventually get to where class interactions are being covered. As long as you write 1 test per method, you will cover class interaction as well.
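A minimal illustration of the rule (a sketch with NUnit and a hypothetical Calculator class): each public method gets its own focused test.

using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
    public int Negate(int x) { return -x; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_SumsTwoNumbers()
    {
        Assert.AreEqual(5, new Calculator().Add(2, 3));
    }

    [Test]
    public void Negate_FlipsTheSign()
    {
        Assert.AreEqual(-7, new Calculator().Negate(7));
    }
}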
There is probably no one standard answer. Unit tests are for the developer (or they should be), do what is most helpful to you.
One downside of testing individual methods is you may not test the actual behavior of the object. If the mocking of some methods is not accurate that may go undetected. Also mocks are a lot of work, and they tend to make the tests very fragile, because they make the tests care a lot about what specific method calls take place.
In my own code I try whenever possible to separate infrastructure-type dependencies from business logic so that I can write tests of the business logic classes entirely without mocks. If you have a nasty legacy code base it probably makes more sense to test individual methods and mock any collaborating methods of the object, in order to insulate the parts from each other.
Theoretically objects are supposed to be cohesive so it would make sense to test them as a whole. In practice a lot of things are not particularly object-oriented. In some cases it is easier to mock collaborator methods than it is to mock injected dependencies that get called by the collaborators.
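To illustrate that separation (all names here are hypothetical): keep the business rule in a class that takes plain values and touches no infrastructure, and its tests need no mocks at all.

using NUnit.Framework;

// Pure business logic: no I/O, no collaborators, nothing to mock.
public class DiscountCalculator
{
    public decimal Apply(decimal orderTotal, bool isLoyalCustomer)
    {
        var rate = isLoyalCustomer ? 0.10m : 0.00m;
        return orderTotal * (1 - rate);
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void LoyalCustomersGetTenPercentOff()
    {
        Assert.AreEqual(90.00m, new DiscountCalculator().Apply(100m, isLoyalCustomer: true));
    }
}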
Is it generally accepted that you cannot test code unless the code is set up to be tested?
A hypothetical bit of code:
public void QueueOrder(SalesOrder order)
{
    // Reject orders more than 20 days old.
    if (order.Date < DateTime.Now.AddDays(-20))
        throw new Exception("Order is too old to be processed");
    ...
}
Some would consider refactoring it into:
protected virtual DateTime MinOrderAge
{
    get { return DateTime.Now.AddDays(-20); }
}
public void QueueOrder(SalesOrder order)
{
    if (order.Date < MinOrderAge)
        throw new Exception("Order is too old to be processed");
    ...
}
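The payoff of the refactoring is that a test can now subclass the class and pin the cutoff. A sketch, assuming the property is virtual as above, that the containing class is called OrderProcessor, and that SalesOrder.Date is settable (all stand-ins for whatever the real code uses):

// Test double: overrides the cutoff so the test is deterministic.
class TestableOrderProcessor : OrderProcessor
{
    protected override DateTime MinOrderAge
    {
        get { return new DateTime(2020, 1, 1); }
    }
}

[Test]
public void QueueOrder_RejectsOrdersOlderThanTheCutoff()
{
    var processor = new TestableOrderProcessor();
    var order = new SalesOrder { Date = new DateTime(2019, 12, 1) };

    Assert.Throws<Exception>(() => processor.QueueOrder(order));
}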
Note: You can come up with even more complicated solutions involving an IClock interface and a factory. It doesn't affect my question.
The issue with changing the above code is that the code has changed. The code has changed without the customer asking for it to be changed, and any change requires meetings and conference calls. And so I'm at the point where it's easier not to test anything.
If I'm not willing or able to make changes, does that make me unable to perform testing?
Note: The above pseudo-code might look like C#, but that's only so it's readable. The question is language agnostic.
Note: The hypothetical code snippet, problem, need for refactoring, and refactoring are hypothetical. You can insert your own hypothetical code sample if you take umbrage with mine.
Note: The above hypothetical code is hypothetical. Any relation to any code, either living or dead, is purely coincidental.
Note: The code is hypothetical, but any answers are not. The question is not subjective, as I believe there is an answer.
Update: The problem here, of course, is that I cannot guarantee that the change in the above example didn't break anything. Sure, I refactored one piece of code out into a separate method, and the code is logically identical.
But I cannot guarantee that adding a new protected method didn't shift the virtual method table of the object, and if this class lives in a DLL then I've just introduced an access violation.
The answer is yes, some code will need to change to make it testable.
But there is likely lots of code that can be tested without having to change it. I would focus on writing tests for that stuff first, then writing tests for the rest when other customer requirements give you the opportunity to refactor it in a testable way.
Code can be written from the start to be testable. If it is not written from the start with testability in mind, you can still test it, you may just run into some difficulties.
In your hypothetical code, you could test the original code by creating a SalesOrder with a date far in the past, or you could mock out DateTime.Now. Having the code refactored as you showed is nicer for testing, but it isn't absolutely necessary.
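For instance, the original, unrefactored method can be exercised like this (a sketch assuming NUnit, an OrderProcessor class owning QueueOrder, and a settable SalesOrder.Date):

[Test]
public void QueueOrder_ThrowsForAnOrderFarInThePast()
{
    var processor = new OrderProcessor();
    var order = new SalesOrder { Date = DateTime.Now.AddYears(-1) }; // well past the 20-day cutoff

    var ex = Assert.Throws<Exception>(() => processor.QueueOrder(order));
    StringAssert.Contains("too old", ex.Message);
}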
If your code is not designed to be tested then it is more difficult to test. In your example you would have to override the DateTime.Now property, which is probably no easy task.
If you think it adds little value to add tests to your code, or changing the existing code is not allowed, then you should not do it.
However, if you believe in TDD, then you should write new code with tests.
You can unit test your original example using a mock object framework. In this case I would mock the SalesOrder object several times, configuring a different Date value each time, and test. This avoids changing any code that ships and lets you validate the algorithm in question: that the order date is not too far in the past.
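A sketch of that approach with Moq. Note that Moq can only override virtual members, so this assumes SalesOrder.Date is virtual (OrderProcessor is again a stand-in for the class that owns QueueOrder):

var order = new Mock<SalesOrder>();
order.SetupGet(o => o.Date).Returns(DateTime.Now.AddDays(-30)); // older than the 20-day cutoff

var processor = new OrderProcessor();
Assert.Throws<Exception>(() => processor.QueueOrder(order.Object));

order.SetupGet(o => o.Date).Returns(DateTime.Now.AddDays(-1)); // recent enough to be accepted
Assert.DoesNotThrow(() => processor.QueueOrder(order.Object));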
For a better overall view of what's possible given the dependencies you're dealing with, and the language features you have at your disposal, I recommend Working Effectively with Legacy Code.
This is easy to accomplish in some dynamic languages. For example, I can hook into the import/using statements and replace an actual dependency with a stub, even if the SUT (System Under Test) uses it as an implicit dependency. Or I can redefine those symbols (classes, methods, functions, etc.). I'm not saying this is the way to go. Things should be refactored, but this approach makes it easier to write some characterization tests first.
The problem with this sort of code is always that it creates and depends on a lot of static classes, framework types, and so on.
A very good solution to 'inject' fakes for all these objects is Typemock Isolator (which is commercial, but worth every penny). So yes, you certainly can test legacy code, which was written without testability in mind. I've done it on a big project with Typemock and had very good results.
Alternatively to Typemock, you may use the free MS Moles framework, which does basically the same. It's only that it has a quite unintuitive API and is much harder to learn and use.
Mockito + PowerMock for Mockito.
You'll be able to test almost everything without dramatically changing your code. But some setters will be needed to inject the mocks.
In the article Test for Required Behavior, not Incidental Behavior, Kevlin Henney advises us that:
"[...] a common pitfall in testing is to hardwire tests to the specifics of an implementation, where those specifics are incidental and have no bearing on the desired functionality."
However, when using TDD, I often end up writing tests for incidental behaviour. What do I do with these tests? Throwing them away seems wrong, but the advice in the article is that these tests can reduce agility.
What about separating them into a separate test suite? That sounds like a start, but seems impractical intuitively. Does anyone do this?
In my experience implementation-dependent tests are brittle and will fail massively at the very first refactoring. What I try to do is focus on deriving a proper interface for a class while writing the tests, effectively avoiding such implementation details in the interface. This not only solves the brittle tests, but it also promotes cleaner design.
This still allows for extra tests that check for the risky parts of my selected implementation, but only as extra protection to a good coverage of the "normal" interface of my class.
For me the big paradigm shift came when I started writing tests before even thinking about the implementation. My initial surprise was that it became much easier to generate "extreme" test cases. Then I recognized that the improved interface in turn helped shape the implementation behind it. The result is that my code nowadays doesn't do much more than the interface exposes, effectively reducing the need for most "implementation" tests.
During refactoring of the internals of a class, all tests will hold. Only in cases where the exposed interface changes, the test set may need to be extended or modified.
The problem you describe is very real and very easy to encounter when TDD'ing. In general you can say that it isn't testing incidental behavior itself which is a problem, but rather if tons of tests depend on that incidental behavior.
The DRY principle applies to test code as well as to production code. That can often be a good guideline when writing test code. The goal should be that all the 'incidental' behavior you specify along the way is isolated so that only a few tests out of the entire test suite use them. In that way, if you need to refactor that behavior, you only need to modify a few tests instead of a large fraction of the entire test suite.
This is best achieved by copious use of interfaces or abstract classes as collaborators, because this means that you get low class coupling.
Here's an example of what I mean. Assume that you have some kind of MVC implementation where a Controller should return a View. Assume that we have a method like this on a BookController:
public View DisplayBookDetails(int bookId)
The implementation should use an injected IBookRepository to get the book from the database and then convert that to a View of that book. You could write a lot of tests to cover all aspects of the DisplayBookDetails method, but you could also do something else:
Define an additional IBookMapper interface and inject that into the BookController in addition to the IBookRepository. The implementation of the method could then be something like this:
public View DisplayBookDetails(int bookId)
{
    return this.mapper.Map(this.repository.GetBook(bookId));
}
Obviously this is an overly simplistic example, but the point is that now you can write one set of tests for your real IBookMapper implementation, which means that when you test the DisplayBookDetails method, you can just use a stub (best generated by a dynamic mock framework) to implement the mapping, instead of trying to define a brittle and complex relationship between a Book domain object and how it is mapped.
The use of an IBookMapper is definitely an incidental implementation detail, but if you use a SUT Factory, or better yet an auto-mocking container, the definition of that incidental behavior is isolated, which means that if you later decide to refactor the implementation, you can do so by changing the test code in only a few places.
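A sketch of what such a test could then look like with Moq (the Book and View types and the BookController constructor signature are assumptions):

[Test]
public void DisplayBookDetails_ReturnsTheMappedView()
{
    var book = new Book();
    var view = new View();

    var repository = new Mock<IBookRepository>();
    repository.Setup(r => r.GetBook(42)).Returns(book);

    var mapper = new Mock<IBookMapper>();
    mapper.Setup(m => m.Map(book)).Returns(view);

    var sut = new BookController(repository.Object, mapper.Object);

    // How the mapping works is covered by the IBookMapper implementation's
    // own tests; here we only care that the mapped view is returned.
    Assert.AreSame(view, sut.DisplayBookDetails(42));
}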
"What about separating them into a separate test suite?"
What would you do with that separate suite?
Here's the typical use case.
1. You wrote some tests which test implementation details they shouldn't have tested.
2. You factor those tests out of the main suite into a separate suite.
3. Someone changes the implementation.
4. Your implementation suite now fails (as it should).
What now?
Fix the implementation tests? I think not. The point was to not test the implementation, because it leads to way too much maintenance work.
Have tests that can fail while the overall unit test run is still considered good? If a test fails but the failure doesn't matter, what does that even mean? (See the question Non-critical unittest failures for an example.) An ignored or irrelevant test is just costly.
You have to discard them.
Save yourself some time and aggravation by discarding them now, not when they fail.
If you really do TDD, the problem is not as big as it may seem at first, because you are writing tests before code. You should not even think about any possible implementation before writing the test.
Such problems of testing incidental behavior are much more common when you write tests after the implementation code. Then the easy way is to just check that the function output is OK and does what you want, and then write the test using that output. Really, that's cheating, not TDD, and the cost of cheating is tests that will break if the implementation changes.
The good thing is that such tests will break even more easily than good tests (a good test here meaning one that depends only on the wanted feature, not on the implementation). Having tests so generic they never break is far worse.
Where I work what we do is simply fix such tests when we stumble upon them. How we fix them depends on the kind of incidental test performed.
- The most common such test is probably the case where the test expects results in some definite order when that order is not actually guaranteed. The easy fix is simple enough: sort both the actual and the expected results (see the sketch after this list). For more complex structures, use a comparator that ignores that kind of difference.
- Every so often we test an innermost function, while it is some outermost function that performs the feature. That's bad, because refactoring away the innermost function becomes difficult. The solution is to write another test covering the same feature at the outermost function level, then remove the old test, and only then refactor the code.
- When such a test breaks and we see an easy way to make it implementation-independent, we do it. If it's not easy, we may choose to fix it so that it depends on the new implementation instead. The test will break again at the next implementation change, but that's not necessarily a big deal. If it is a big deal, then definitely throw that test away and find another one that covers the feature, or change the code to make it easier to test.
- Another bad case is when we have written tests using some mocked object (used as a stub) and then the mocked object's behavior changes (an API change). This is bad because the tests do not break when they should: changing the real object's behavior won't change the mock mimicking it. The fix here is to use the real object instead of the mock if possible, or to fix the mock for the new behavior. In that case both the mock's behavior and the real object's behavior are incidental, but we believe tests that do not fail when they should are a bigger problem than tests that break when they shouldn't. (Admittedly, such cases can also be caught at the integration-test level.)
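For the ordering case in the first bullet, the fix can be as small as an order-insensitive assertion. A sketch with NUnit (the service and method names are hypothetical):

[Test]
public void GetActiveUserIds_ReturnsTheExpectedIds_InAnyOrder()
{
    var service = new UserService(); // hypothetical class under test
    var actual = service.GetActiveUserIds();

    // AreEquivalent compares contents while ignoring order, so the test no
    // longer encodes the incidental ordering chosen by the implementation.
    CollectionAssert.AreEquivalent(new[] { 1, 2, 3 }, actual);
}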
Presume you have a class which passes all its current unit tests.
If you were to add or pull out some methods/introduce a new class and then use composition to incorporate the same functionality would the new class require testing?
I'm torn between whether or not you should, so any advice would be great.
Edit:
I suppose I should have added that I use DI (dependency injection); should I therefore inject the new class as well?
Not in the context of TDD, no, IMHO. The existing tests justify everything about the existence of the class. If you need to add behavior to the class, that would be the time to introduce a test.
That being said, it may make your code and tests clearer to move the tests into a class that relates to the new class you made. That depends very much on the specific case.
EDIT: After your edit, I would say that that makes a good case for moving some existing tests (or a portion of the existing tests). If the class is so decoupled that it requires injection, then it sounds like the existing tests may not be obviously covering it if they stay where they are.
Initially, no, they're not necessary. If you had perfect coverage, extracted the class and did nothing more, you would still have perfect coverage (and those tests would confirm that the extraction was indeed a pure refactoring).
But eventually - and probably soon - yes. The extracted class is likely to be used outside its original context, and you want to constrain its behavior with tests that are specific to the new class, so that changes for a new context don't inadvertently affect behavior for the original caller. Of course the original tests would still reveal this, but good unit tests point directly to the problematic unit, and the original tests are now a step removed.
It's also good to have the new tests as executable documentation for the newly-extracted class.
Well, yes and no.
If I understand correctly, you have written tests, and wrote production code that makes the tests pass - i.e. the simplest thing that works.
Now you are in the refactoring phase. You want to extract code from one class and put it in a class of its own, probably to keep up with the Single Responsibility Principle (or SRP).
You may make the refactoring without adding tests, since your tests are there precisely to allow you to refactor without fear. Remember - refactor means changing the code, without modifying the functionality.
However, it is quite likely that refactoring the code will break your tests. This is most likely caused by fragile tests that test behavior rather than state, i.e. you mocked the methods you ported out.
On the other hand, if your tests are primarily state-driven (i.e. you assert results and ignore implementation), then your new service component (the block of code you extracted to a new class) will not be tested on its own. If you use some form of code-coverage tool, you'll find out. If that is the case, you may wish to test that it works. May, because 100% code coverage is neither desirable nor feasible. If possible, I'd try to add tests for that service.
In the end, it may very well boil down to a judgment call.
I would say no. It is already being tested by the tests run on the old class.
As others have said, it's probably not entirely needed right away, since all the same stuff is still under test. But once you start making changes to either of those two classes individually, you should separate the tests.
Of course, the tests shouldn't be too hard to write; since you have the stuff being tested already, it should be fairly trivial to break out the various bits of the tests.
When should I mock?
Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit test. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too, I think there is a balance with some mocks and some real objects you will have to find for yourself.
There is one really good reason to use stubs/mocks instead of real classes: to keep the class under test isolated from everything else, which is what makes the test a pure unit test. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
- Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests become sluggish. Slow tests make developers run unit tests less often, and if you're doing Test Driven Development, time-hogging tests are a devastating waste of developer time.
- It is easier to track down problems when the test is isolated to the class under test. In contrast, with a system test it is much more difficult to track down nasty bugs that are not apparent in stack traces or the like.
- Tests are less fragile under changes to external classes/interfaces, because you're purely testing the class under test. Low fragility is also an indication of low coupling, which is good software engineering.
- You're testing against the external behaviour of a class rather than the internal implementation, which is more useful when deciding on code design.
Now if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to the code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from my answer to the question How do I unit-test inheriting objects?:
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets or serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing as soon as possible.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans then that's fine.
For anything more complex I would be wary: mocks generated from mocking frameworks can be given precise expectations about how they will be used, e.g. the number of methods called, the precise sequence, and the parameters expected each time. Your real objects cannot do this for you, so you risk losing depth in your tests.
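For example, a framework-generated mock can assert exactly how it was used, which a plain real object cannot. A Moq sketch (IMailer and SignupService are hypothetical):

// Strict mode: any call that wasn't set up fails the test immediately.
var mailer = new Mock<IMailer>(MockBehavior.Strict);
mailer.Setup(m => m.Send("bob@example.com", "Welcome!"));

var service = new SignupService(mailer.Object);
service.Register("bob@example.com");

// Verify the exact call happened exactly once, with these exact arguments.
mailer.Verify(m => m.Send("bob@example.com", "Welcome!"), Times.Once());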
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world one (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
- When I want to verify expectations in the interaction with the Thing.
- When using the RealThing would bring unwanted side-effects.
- Or when the RealThing is simply too hard or troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
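A sketch of that idea (ICustomerStore and Customer are made up for the example): the fake implements the same interface as the database-backed class, but serves cooked data from an in-memory dictionary.

using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerStore
{
    Customer GetById(int id);
}

// Fake used in tests when the real database is down or too slow.
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> _data = new Dictionary<int, Customer>
    {
        { 1, new Customer { Id = 1, Name = "Ada" } },
        { 2, new Customer { Id = 2, Name = "Grace" } },
    };

    public Customer GetById(int id)
    {
        return _data[id];
    }
}

Because the class under test receives an ICustomerStore through its constructor, the fake drops in wherever the real implementation would go, and the tests run without a database.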
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use interfaces and injection where they don't fit. Also, if the mock needs to be fairly complex, there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, as to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.