What is a "Unit"? [closed] - unit-testing

In the context of unit testing, what is a "unit"?

I usually define it as a single code execution path through a single method. That comes from the rule of thumb that the number of unit tests required to test a method is equal to or greater than the method's cyclomatic complexity number.
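For example, here is what that rule of thumb looks like in practice (a minimal sketch in C# with NUnit; the class and values are invented): a method containing a single condition has a cyclomatic complexity of 2, so it gets at least two tests, one per execution path:
using NUnit.Framework;

public static class Pricing
{
    // Cyclomatic complexity 2: one branch, two execution paths.
    public static decimal Discount(decimal orderTotal)
    {
        return orderTotal >= 100m ? orderTotal * 0.10m : 0m;
    }
}

[TestFixture]
public class PricingTests
{
    [Test]
    public void Discount_LargeOrder_CoversTheDiscountPath()
    {
        Assert.AreEqual(10m, Pricing.Discount(100m));
    }

    [Test]
    public void Discount_SmallOrder_CoversTheNoDiscountPath()
    {
        Assert.AreEqual(0m, Pricing.Discount(99m));
    }
}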

While the definition can vary, a "unit" is a stand-alone piece of code.
Usually, it's a single Class.
However, few classes exist in isolation. So, you often have to mock up the classes that collaborate with your class under test.
Therefore, a "unit" (also called a "fixture") is a single testable thing -- usually a class plus mock-ups for collaborators.
You can easily test a package of related classes using unit-test technology; we do this all the time, and there are few or no mocks in those fixtures.
In fact, you can test whole stand-alone application programs as single "units". We do this as well, providing a fixed set of inputs and expected outputs to be sure the overall application does things correctly.
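To illustrate the "class plus mock-ups for collaborators" idea, here is a sketch with invented names; the collaborator is replaced by a hand-written stub rather than a generated mock:
using NUnit.Framework;

public interface ITaxRateProvider
{
    decimal RateFor(string region);
}

// The class under test: the "unit".
public class InvoiceCalculator
{
    private readonly ITaxRateProvider _rates;
    public InvoiceCalculator(ITaxRateProvider rates) { _rates = rates; }

    public decimal TotalWithTax(decimal net, string region)
        => net * (1 + _rates.RateFor(region));
}

// Hand-written stand-in for the collaborator, so the fixture stays self-contained.
class FixedRateStub : ITaxRateProvider
{
    public decimal RateFor(string region) => 0.20m;
}

[TestFixture]
public class InvoiceCalculatorFixture
{
    [Test]
    public void TotalWithTax_AppliesRateFromCollaborator()
    {
        var calculator = new InvoiceCalculator(new FixedRateStub());
        Assert.AreEqual(120m, calculator.TotalWithTax(100m, "UK"));
    }
}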

A unit is any element that can be tested in isolation. Thus, one will almost always be testing methods in an OO environment, and some class behaviours where there is close coupling between methods of that class.

In my experience the debate about "what is a unit" is a waste of time.
What matters far more is "how fast is the test?" Fast tests run at the rate of 100+/second. Slow tests run slow enough that you don't reflexively run them every time you pause to think.
If your tests are slow you won't run them as frequently, making the time between bug injection and bug detection longer.
If your tests are slow you probably aren't getting the design benefits of unit testing.
Want fast tests? Follow Michael Feathers' rules of unit testing.
But if your tests are fast and they're helping you write your code, who cares what label they have?

We define 'unit' to be a single class.
As you rightly assert 'unit' is an ambiguous term and this leads to confusion when developers simply use the expression without adding detail. Where I work we have taken the time to define what we mean when we say 'unit test', 'acceptance test', etc. When someone new joins the team they learn the definitions that we have.
From a practical point of view there will likely always be differences of opinion about what a 'unit' is. I have found that what is important is simply that the term is used consistently within the context of a project.

Where I work, a 'unit' is a function. That is because we are not allowed to use anything other than functional decomposition in our design (no OOP). I agree 100% with Will's answer. At least, that is my answer within the paradigm of my work in embedded programming for engine and flight controls and various other systems.

A "unit" can be different things: a class, a module, a file, ... Choose your desired granularity of testing.

I would say a unit is a 'black box' which may be used within an application. It is something which has a known interface and delivers a well-defined result. This is something which has to work according to a design-spec and must be testable.
Having said that, I often use unit testing when building items within such 'black boxes' just as a development aid.

My understanding (or rationale) is that units should follow a hierarchy of abstraction and scope similar to the hierarchical decomposition of your code.
A method is a small and often (conceptually) atomic operation at a low level of abstraction, so it should be tested.
A class is a mid-level concept that offers services and states and should therefore be tested.
A whole module (especially if its components are hidden) is a high-level concept with a limited interface, so it should be tested, etc.
Since many bugs arise from the interactions between multiple methods and classes, I do not see how unit testing only individual methods can achieve real coverage unless you already have methods that exercise every important combination, and having those would suggest that you did not test enough before writing client code.

That's not important. Of course it's normal to ask this when getting started with unit testing, but I repeat: it's not important.
The unit is something along the lines of:
the method invoked by the test; in OOP this method has to be invoked on an instance of a class (except for static methods)
a function in procedural languages.
But the "unit", function or method, may also invoke another "unit" from the application, which is likewise exercised by the test. So the "unit" can span several functions or even several classes.
"The test is more important than the unit" (Testivus). A good test shall be:
Automatic - execution and diagnostic
Fast - you'll run them very often
Atomic - a test shall test only one thing
Isolated - tests shall not depend on each other
Repeatable - result shall be deterministic

I would say that a unit in unit testing is a single piece of responsibility for a class.
Of course this opinion comes from the way we work:
In my team we use the term unit tests for tests where we test the responsibilities of a single class. We use mock objects to stand in for all other objects, so that the class's responsibilities are truly isolated and not affected if other objects have errors in them.
We use the term integration tests for tests where we test how two or more classes are functioning together, so that we see that the actual functionality is there.
Finally, we help our customers write acceptance tests, which operate on the application as a whole to see what actually happens when a "user" is working with the application.
This is what makes me think it is a good description.

Related

Is creating "testable code" always consistent with following the best OOP design principles?

This is perhaps too general/subjective a question for StackOverflow, but I've been dying to ask it somewhere.
Maybe it's because I'm new to the software engineering world, but it seems like the buzzwords I've been hearing for the past couple of years are things like
"testable code"
"test coverage"
"pure functions"
"every code path in your entire application covered by a test that is a pure in-memory test -- doesn't open database connections or anything. Then we'll know that the code we deploy is guaranteed to work" (yea, right lol)
and sometimes I find this hard to reconcile with the way I want to design my application.
For example, one thing that happens often is I have a complex algorithm inside one or more private methods
private static void DoFancyAlgorithm(string first, string second)
{
// ...
}
and I want or need it to have test coverage.
Well, since you're not supposed to directly test private methods, I have 3 options:
make the method accessible for testing (maybe InternalsVisibleTo in C# or friend in C++) -- a sketch of this option appears just after this list
move the logic to a separate class whose "single responsibility" is dealing with this logic, even though from an OOP perspective I believe the logic should be confined to the class it is currently inside, and test that separate class
leave the code as-is and spend 100+ hours figuring out how to set up the scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
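For what it's worth, option 1 can look roughly like this in C# (a sketch; the assembly name, the int return value and the trivial body are invented purely so there is something to assert on):
// In the production assembly (any source file, e.g. AssemblyInfo.cs):
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Tests")]

public static class FancyAlgorithms
{
    // Was private; internal keeps it hidden from other assemblies
    // while letting the MyApp.Tests assembly call it directly.
    internal static int DoFancyAlgorithm(string first, string second)
    {
        // ... real logic elided; placeholder so the sketch compiles ...
        return first.Length + second.Length;
    }
}

// In the MyApp.Tests assembly:
[NUnit.Framework.TestFixture]
public class FancyAlgorithmTests
{
    [NUnit.Framework.Test]
    public void CombinesBothInputs()
    {
        NUnit.Framework.Assert.AreEqual(7, FancyAlgorithms.DoFancyAlgorithm("abc", "defg"));
    }
}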
So my question is:
Is creating "testable code" always consistent with the best OOP
practice or is there sometimes a tradeoff?
Creating testable code does, of course, have consequences for the application's design.
So you may end up making some trade-offs in the design, but these are generally limited.
Besides, unit testing of the component API focuses on the inputs and outputs of the tested functions.
So I have difficulty understanding how you could end up with a bad smell like this:
leave the code as-is and spend 100+ hours figuring out how to set up the scenario where I indirectly test the logic of the method. Sometimes this means creating ridiculous mock objects to inject into the class.
In most cases, when setting up or understanding the unit test scenarios consumes that much time, it very probably means that:
the component API is not clear,
and/or you are mocking too many things, in which case you should ask yourself whether you should favor integration tests (without mocks, or with very few) over unit tests (with mocks).
Ideally, in the case of legacy code that is running in production, refactoring the code in order to write new unit test cases is NOT the way to go.
Instead, it is better to first write unit test cases for the existing code and check them in. With that safety net in place (you have to keep all the unit test cases passing at every step), you can then refactor your code (and test cases) in small steps. The goal of these refactoring steps should be to make the code follow the best OOP design principles.
Having to spend time writing unit test cases for existing code is the biggest disadvantage of working with a legacy codebase.
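A first safety-net test for existing code can be as blunt as pinning down whatever the code currently returns, in the spirit of the characterization tests described in Working Effectively with Legacy Code (a sketch; the class, inputs and pinned value are all invented):
using NUnit.Framework;

// Stand-in for an existing legacy class (invented for this sketch).
public class LegacyTariffCalculator
{
    public decimal Calculate(string plan, int units)
    {
        return plan == "standard" ? units * 12.5m + 5m : units * 20m;
    }
}

[TestFixture]
public class LegacyTariffCharacterizationTests
{
    [Test]
    public void StandardPlan_CurrentBehaviourIsPinned()
    {
        // The expected value is simply what the code produces today; the test
        // exists so that later refactoring cannot change the behaviour unnoticed.
        Assert.AreEqual(42.5m, new LegacyTariffCalculator().Calculate("standard", 3));
    }
}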

Unit Testing Classes VS Methods

When unit testing, is it better practice to test a class or individual methods?
Most of the examples I've seen test the class apart from other classes, mocking dependencies between classes. Another method I've played around with is mocking the methods you're not testing (by overriding) so that you're only testing the code in one method. Thus 1 bug breaks 1 test, since the methods are isolated from each other.
I was wondering if there is a standard method and if there are any big disadvantages to isolating each method for testing as opposed to isolating classes.
The phrase unit testing comes from hardware systems testing, and is more or less semantics-free when applied to software. It can get used for anything from isolation testing of a single routine to testing a complete system in headless mode with an in-memory database.
So don't trust anyone who argues that the definition implies there is only one way to do things independently of context; there are a variety of ways, some of which are sometimes more useful than others. And presumably every approach a smart person would argue for has at least some value somewhere.
The smallest unit of hardware is the atom, or perhaps some subatomic particle. Some people test software like they were scanning each atom to see if the laws of quantum mechanics still held. Others take a battleship and see if it floats.
Something in between is very likely better. Once you know something about the kind of thing you are producing beyond 'it is software', you can start to come up with a plan that is appropriate to what you are supposed to be doing.
The point of unit testing is to test a unit of code, i.e. a class.
This gives you confidence that that part of the code, on its own, is doing what is expected.
This is also the first part of the testing process. It helps to catch those pesky bugs as early as possible, and having a unit test that demonstrates a bug makes it easier to fix further down the line.
Unit testing by definition is testing the smallest piece of written code you can. "Units" are not classes; they are methods.
Every public method should have at least 1 unit test, that tests that method specifically.
If you follow the rule above, you will eventually get to where class interactions are being covered. As long as you write 1 test per method, you will cover class interaction as well.
There is probably no one standard answer. Unit tests are for the developer (or they should be); do what is most helpful to you.
One downside of testing individual methods is you may not test the actual behavior of the object. If the mocking of some methods is not accurate that may go undetected. Also mocks are a lot of work, and they tend to make the tests very fragile, because they make the tests care a lot about what specific method calls take place.
In my own code I try whenever possible to separate infrastructure-type dependencies from business logic so that I can write tests of the business logic classes entirely without mocks. If you have a nasty legacy code base it probably makes more sense to test individual methods and mock any collaborating methods of the object, in order to insulate the parts from each other.
Theoretically objects are supposed to be cohesive so it would make sense to test them as a whole. In practice a lot of things are not particularly object-oriented. In some cases it is easier to mock collaborator methods than it is to mock injected dependencies that get called by the collaborators.
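The "mock methods you're not testing by overriding" approach from the question looks roughly like this (a sketch, sometimes called subclass-and-override; all names are invented):
using NUnit.Framework;

public class OrderProcessor
{
    // Virtual so a test can substitute it; the real implementation would query
    // the inventory database, so throwing here makes it obvious the test never calls it.
    protected virtual bool IsInStock(string sku)
        => throw new System.NotImplementedException();

    public string Process(string sku)
        => IsInStock(sku) ? "accepted" : "backordered";
}

// Testing subclass overrides only the collaborating method.
class OrderProcessorWithStubbedStock : OrderProcessor
{
    public bool StockAnswer;
    protected override bool IsInStock(string sku) => StockAnswer;
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_OutOfStockItem_IsBackordered()
    {
        var processor = new OrderProcessorWithStubbedStock { StockAnswer = false };
        Assert.AreEqual("backordered", processor.Process("SKU-1"));
    }
}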

Is a class that is hard to unit test badly designed? [closed]

I am now doing unit testing on an application which was written over the past year, before I started doing unit testing diligently. I realized that the classes I wrote are hard to unit test, for the following reasons:
Relies on loading data from the database, which means I have to set up a row in the table just to run the unit test (and I am not testing database capabilities).
Requires a lot of other external classes just to get the class I am testing to its initial state.
On the whole, there doesn't seem to be anything wrong with the design except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated test cases along with each class, thereby ensuring that I didn't heap extra dependencies or coupling onto the class for it to work, the class might have been better designed.
Does this reasoning hold water? What are your experiences?
Yes, you are right. A class which is hard to unit test is (almost always) not well designed (there are exceptions, as always, but these are rare - IMHO one had better not try to explain the problem away this way). Lack of unit tests means that it is harder to maintain - you have no way of knowing whether you have broken existing functionality whenever you modify anything in it.
Moreover, if it is (co)dependent with the rest of the program, any changes in it may break things even in seemingly unrelated, far away parts of the code.
TDD is not simply a way to test your code - it is also a different way of design. Effectively using - and thinking about using - your own classes and interfaces from the very first moment may result in a very different design than the traditional way of "code and pray". One concrete result is that typically most of your critical code is insulated from the boundaries of your system, i.e. there are wrappers/adapters in place to hide e.g. the concrete DB from the rest of the system, and the "interesting" (i.e. testable) code is not within these wrappers - these are as simple as possible - but in the rest of the system.
Now, if you have a bunch of code without unit tests and want to cover it, you have a challenge. Mocking frameworks may help a lot, but it is still a pain in the ass to write unit tests for such code. A good source of techniques for dealing with such issues (commonly known as legacy code) is Working Effectively with Legacy Code, by Michael Feathers.
Yes, the design could be better with looser coupling, but ultimately you need data to test against.
You should look into Mocking frameworks to simulate the database and the other classes that this class relies on so you can have better control over the testing.
I've found that dependency injection is the design pattern that most helps make my code testable (and, often, also reusable and adaptable to contexts that are different from the original one that I had designed it for). My Java-using colleagues adore Guice; I mostly program in Python, so I generally do my dependency injection "manually", since duck typing makes it easy; but it's the right fundamental DP for either static or dynamic languages (don't let me get started on "monkey patching"... let's just say it's not my favorite;-).
Once your class is ready to accept its dependencies "from the outside", instead of having them hard-coded, you can of course use fake or mock versions of the dependencies to make testing easier and faster to run -- but this also opens up other possibilities. For example, if the state of the class as currently designed is complex and hard to set up, consider the State design pattern: you can refactor the design so that the state lives in a separate dependency (which you can set up and inject as desired) and the class itself is mostly responsible for behavior (updating the state).
Of course, by refactoring in this way, you'll be introducing more and more interfaces (abstract classes, if you're using C++) -- but that's perfectly all right: it's an excellent principle to "program to an interface, not an implementation".
So, to address your question directly, you're right: the difficulty in testing is definitely the design equivalent of what extreme programming calls a "code smell". On the plus side, though, there's a pretty clear path to refactor this problem away -- you don't have to have a perfect design to start with (fortunately!-), but can enhance it as you go. I'd recommend the book Refactoring to Patterns as good guidance to this purpose.
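A minimal C# sketch of the "accept dependencies from the outside" idea (invented names; the injected clock plays the role of the extracted, easily set-up state):
using NUnit.Framework;

public interface IClock
{
    System.DateTime Now { get; }
}

public class GreetingService
{
    private readonly IClock _clock;
    public GreetingService(IClock clock) { _clock = clock; } // dependency injected, not hard-coded

    public string Greet() => _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
}

// Fake dependency: the test controls the state completely.
class FixedClock : IClock
{
    public System.DateTime Now { get; set; }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void BeforeNoon_GreetsWithGoodMorning()
    {
        var clock = new FixedClock { Now = new System.DateTime(2024, 1, 1, 9, 0, 0) };
        Assert.AreEqual("Good morning", new GreetingService(clock).Greet());
    }
}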
For me, code should be designed for testability. Put the other way around, I consider non-testable or hard-to-test code badly designed (regardless of its beauty).
In your case, maybe you can mock external dependencies to run real unit tests (in isolation).
I'll take a different tack: the code just isn't designed for testability, but that does not mean it's necessarily badly designed. A design is the product of competing *ilities, of which testability is only one. Every coding decision increases some of the *ilities while decreasing others. For example, designing for testability generally harms simplicity/readability/understandability (because it adds complexity). A good design favors the most important *ilities of your situation.
Your code isn't bad, it just maximizes the other *ilities other than testability. :-)
Update: Let me add this before I get accused of saying designing for testability isn't important.
The trick of course is to design and code to maximize the good *ilities, particularly the important ones. Which ones are the important ones depends on your situation. In my experience in my situations, designing for testability has been one of the more important *ilities.
Ideally, the large majority of your classes will be unit-testable, but you'll never get to 100% since you're bound to have at least one bit that is dedicated to binding the other classes together to an overall program. (It's best if that can be exactly one place that is obviously and trivially correct, but not all code is as pleasant to work with.)
While there isn't a way to establish if a class is "well designed" or not, at least the first thing you mention is usually a sign of a questionable design.
Instead of relying on the database, the class you are testing should have a dependency on an object whose only responsibility is getting that data, maybe using a pattern like Repository or DAO.
As for the second reason, it doesn't necessarily highlight a bad design of the class; it can be a problem with the design of the tests (not having a fixture supertype or helpers where you can set up the dependencies used in several tests) and/or the overall architecture (e.g. not using factories or inversion of control to inject the corresponding dependencies).
Also, you probably shouldn't be using "real" objects for your dependencies, but test doubles. This helps you make sure you are testing the behavior of that one class, and not that of its dependencies. I suggest you look into mocking frameworks.
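For the database point, the usual shape is something like this (a sketch assuming the Moq mocking library; the types are invented):
using Moq;
using NUnit.Framework;

public class Customer
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

// The class under test depends on an abstraction, not on the database itself.
public interface ICustomerRepository
{
    Customer FindById(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;
    public CustomerService(ICustomerRepository repository) { _repository = repository; }

    public bool CanPlaceOrder(int customerId) => _repository.FindById(customerId)?.IsActive == true;
}

[TestFixture]
public class CustomerServiceTests
{
    [Test]
    public void InactiveCustomer_CannotPlaceOrder()
    {
        // No table row needed: the repository is replaced by a test double.
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.FindById(7)).Returns(new Customer { Id = 7, IsActive = false });

        Assert.IsFalse(new CustomerService(repository.Object).CanPlaceOrder(7));
    }
}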
I might have an ideal solution for you to consider... the Private Accessor
Recently I served in a role where we used them prolifically to avoid the very situation you're describing -- reliance upon artificial data maintained within the primary data-store.
While it is not the simplest technology to implement, once it is in place you'll be able to easily set the private members used by the method under test to whatever conditions you feel they should have, right from the unit-test code (so no loading from the database). You'll also have accomplished the goal without violating class protection levels.
Then it's basic assert & verification for desired conditions as normal.

Unit test adoption [closed]

We have tried to introduce unit testing to our current project but it doesn't seem to be working. The extra code seems to have become a maintenance headache: when our internal Framework changes, we have to go around and fix any unit tests that hang off it.
We have an abstract base class for unit testing our controllers that acts as a template, calling into the child classes' abstract method implementations; i.e. the Framework calls Initialize, so our controller classes all have their own Initialize method.
I used to be an advocate of unit testing but it doesn't seem to be working on our current project.
Can anyone help identify the problem and how we can make unit tests work for us rather than against us?
Tips:
Avoid writing procedural code
Tests can be a bear to maintain if they're written against procedural-style code that relies heavily on global state or lies deep in the body of an ugly method.
If you're writing code in an OO language, use OO constructs effectively to reduce this.
Avoid global state if at all possible.
Avoid statics as they tend to ripple through your codebase and eventually cause things to be static that shouldn't be. They also bloat your test context (see below).
Exploit polymorphism effectively to prevent excessive ifs and flags
Find what changes, encapsulate it and separate it from what stays the same.
There are choke points in code that change a lot more frequently than other pieces. Do this in your codebase and your tests will become more healthy.
Good encapsulation leads to good, loosely coupled designs.
Refactor and modularize.
Keep tests small and focused.
The larger the context surrounding a test, the more difficult it will be to maintain.
Do whatever you can to shrink tests and the surrounding context in which they are executed.
Use composed method refactoring to test smaller chunks of code.
Are you using a newer testing framework like TestNG or JUnit4?
They allow you to remove duplication in tests by providing you with more fine-grained hooks into the test lifecycle.
Investigate using test doubles (mocks, fakes, stubs) to reduce the size of the test context.
Investigate the Test Data Builder pattern (a sketch appears after the resource list below).
Remove duplication from tests, but make sure they retain focus.
You probably won't be able to remove all duplication, but still try to remove it where it's causing pain. Make sure you don't remove so much duplication that someone can't come in and tell what the test does at a glance. (See Paul Wheaton's "Evil Unit Tests" article for an alternative explanation of the same concept.)
No one will want to fix a test if they can't figure out what it's doing.
Follow the Arrange, Act, Assert Pattern.
Use only one assertion per test.
Test at the right level to what you're trying to verify.
Think about the complexity involved in a record-and-playback Selenium test and what could change under you versus testing a single method.
Isolate dependencies from one another.
Use dependency injection/inversion of control.
Use test doubles to initialize an object for testing, and make sure you're testing single units of code in isolation.
Make sure you're writing relevant tests
"Spring the Trap" by introducing a bug on purpose and make sure it gets caught by a test.
See also: Integration Tests Are A Scam
Know when to use State Based vs Interaction Based Testing
True unit tests need true isolation. Unit tests don't hit a database or open sockets. Stop at mocking these interactions. Verify you talk to your collaborators correctly, not that the proper result from this method call was "42".
Demonstrate Test-Driving Code
It's up for debate whether or not a given team will take to test-driving all code, or writing "tests first" for every line of code. But should they write at least some tests first? Absolutely. There are scenarios in which test-first is undoubtedly the best way to approach a problem.
Try this exercise: TDD as if you meant it (Another Description)
See also: Test Driven Development and the Scientific Method
Resources:
Test Driven by Lasse Koskela
Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce
Working Effectively with Legacy Code by Michael Feathers
Specification By Example by Gojko Adzic
Blogs to check out: Jay Fields, Andy Glover, Nat Pryce
As mentioned in other answers already:
XUnit Patterns
Test Smells
Google Testing Blog
"OO Design for Testability" by Miskov Hevery
"Evil Unit Tests" by Paul Wheaton
"Integration Tests Are A Scam" by J.B. Rainsberger
"The Economics of Software Design" by J.B. Rainsberger
"Test Driven Development and the Scientific Method" by Rick Mugridge
"TDD as if you Meant it" exercise originally by Keith Braithwaite, also workshopped by Gojko Adzic
Are you testing small enough units of code? You shouldn't see too many changes unless you are fundamentally changing everything in your core code.
Once things are stable, you will appreciate the unit tests more, but even now your tests are highlighting the extent to which changes to your framework propagate through your code.
It is worth it; stick with it as best you can.
Without more information it's hard to make a decent stab at why you're suffering these problems. Sometimes it's inevitable that changing interfaces etc. will break a lot of things, other times it's down to design problems.
It's a good idea to try and categorise the failures you're seeing. What sort of problems are you having? E.g. is it test maintenance (as in making them compile after refactoring!) due to API changes, or is it down to the behaviour of the API changing? If you can see a pattern, then you can try to change the design of the production code, or better insulate the tests from changing.
If changing a handful of things causes untold devastation to your test suite in many places, there are a few things you can do (most of these are just common unit testing tips):
Develop small units of code and test small units of code. Extract interfaces or base classes where it makes sense so that units of code have 'seams' in them. The more dependencies you have to pull in (or worse, instantiate inside the class using 'new'), the more exposed to change your code will be. If each unit of code has a handful of dependencies (sometimes a couple or none at all) then it is better insulated from change.
Only ever assert on what the test needs. Don't assert on intermediate, incidental or unrelated state. Design by contract and test by contract (e.g. if you're testing a stack pop method, don't test the count property after pushing -- that should be in a separate test; a sketch of this appears at the end of this answer). I see this problem quite a bit, especially if each test is a variant. If any of that incidental state changes, it breaks everything that asserts on it (whether the asserts are needed or not).
Just as with normal code, use factories and builders in your unit tests. I learned that one when about 40 tests needed a constructor call updated after an API change...
Just as importantly, use the front door first. Your tests should always use normal state if it's available. Only use interaction-based testing when you have to (i.e. there is no state to verify against).
Anyway the gist of this is that I'd try to find out why/where the tests are breaking and go from there. Do your best to insulate yourself from change.
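Here is the stack example from the second point, written so that each test asserts only on the contract it is checking (a sketch using the BCL Stack<int> with NUnit):
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackContractTests
{
    [Test]
    public void Pop_ReturnsTheMostRecentlyPushedItem()
    {
        var stack = new Stack<int>();
        stack.Push(5);

        // Asserts only what this test is about: the value Pop returns.
        Assert.AreEqual(5, stack.Pop());
    }

    [Test]
    public void Push_IncreasesCountByOne()
    {
        var stack = new Stack<int>();
        stack.Push(5);

        // The Count behaviour lives in its own test instead of piggybacking on the Pop test.
        Assert.AreEqual(1, stack.Count);
    }
}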
One of the benefits of unit testing is that when you make changes like this you can prove that you're not breaking your code. You do have to keep your tests in sync with your framework, but this rather mundane work is a lot easier than trying to figure out what broke when you refactored.
I would insist that you stick with TDD. Check your unit testing framework, do a root cause analysis (RCA) with your team, and identify the problem area.
Fix the unit testing code at the suite level, and do not change your code frequently, especially function names or other modules.
It would help if you could share your case study; then we could dig into the problem area more.
Good question!
Designing good unit tests is as hard as designing the software itself. This is rarely acknowledged by developers, so the result is often hastily written unit tests that require maintenance whenever the system under test changes. So, part of the solution to your problem could be spending more time improving the design of your unit tests.
I can recommend one great book that deserves its billing as The Design Patterns of Unit-Testing
HTH
If the problem is that your tests are getting out of date with the actual code, you could do one or both of:
Train all developers to not pass code reviews that don't update unit tests.
Set up an automatic test box that runs the full set of unit tests after every check-in and emails those who break the build. (We used to think that that was just for the "big boys", but we used an open source package on a dedicated box.)
Well if the logic has changed in the code, and you have written tests for those pieces of code, I would assume the tests would need to be changed to check the new logic. Unit tests are supposed to be fairly simple code that tests the logic of your code.
Your unit tests are doing what they are supposed to do: bringing to the surface any breaks in behavior due to changes in the framework, the immediate code, or other external sources. This helps you determine whether the behavior changed and the unit tests need to be modified accordingly, or whether a bug was introduced that is causing the unit test to fail and needs to be corrected.
Don't give up; while it's frustrating now, the benefit will be realized.
I'm not sure about the specific issues that make it difficult to maintain tests for your code, but I can share some of my own experiences when I had similar issues with my tests breaking. I ultimately learned that the lack of testability was largely due to some design issues with the class under test:
Using concrete classes instead of interfaces
Using singletons
Calling lots of static methods for business logic and data access instead of interface methods
Because of this, I found that usually my tests were breaking - not because of a change in the class under test - but due to changes in other classes that the class under test was calling. In general, refactoring classes to ask for their data dependencies and testing with mock objects (EasyMock et al for Java) makes the testing much more focused and maintainable. I've really enjoyed some sites in particular on this topic:
Google testing blog
The guide to writing testable code
Why should you have to change your unit tests every time you make changes to your framework? Shouldn't this be the other way around?
If you're using TDD, then you should first decide that your tests are testing the wrong behavior, and that they should instead verify that the desired behavior exists. Now that you've fixed your tests, your tests fail, and you have to go squish the bugs in your framework until your tests pass again.
Everything comes with a price, of course. At this early stage of development it's normal that a lot of unit tests have to be changed.
You might want to review some bits of your code to do more encapsulation, create less dependencies etc.
When you near production date, you'll be happy you have those tests, trust me :)
Aren't your unit tests too black-box oriented? I mean... let me take an example: suppose you are unit testing some sort of container. Do you use the get() method of the container to verify a new item was actually stored, or do you get a handle to the actual storage to retrieve the item directly from where it is stored? The latter makes brittle tests: when you change the implementation, you're breaking the tests.
You should test against the interfaces, not the internal implementation.
And when you change the framework, you'd be better off trying to change the tests first, and then the framework.
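In code, the non-brittle version of that container test goes through the public interface only (a sketch; the container is invented):
using System.Collections.Generic;
using NUnit.Framework;

public class SimpleContainer
{
    // Internal storage: an implementation detail the tests should not reach into.
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Put(string key, object item) => _items[key] = item;
    public object Get(string key) => _items[key];
}

[TestFixture]
public class SimpleContainerTests
{
    [Test]
    public void StoredItem_CanBeRetrievedThroughThePublicInterface()
    {
        var container = new SimpleContainer();
        container.Put("answer", 42);

        // Uses Get(), not the private dictionary, so swapping the storage
        // for a list or a file would not break this test.
        Assert.AreEqual(42, container.Get("answer"));
    }
}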
I would suggest investing in a test automation tool. If you are using continuous integration you can make it work in tandem. There are tools out there that will scan your codebase and generate tests for you, then run them. The downside of this approach is that it's too generic, because in many cases a unit test's purpose is to break the system.
I have written numerous tests and yes, I have to change them if the codebase changes.
There is a fine line; with an automation tool you would definitely have better code coverage.
However, with well-written developer-based tests you will test system integrity as well.
Hope this helps.
If your code is really hard to test and the test code breaks or requires much effort to keep in sync, then you have a bigger problem.
Consider using the extract-method refactoring to yank out small blocks of code that do one thing and only one thing, without dependencies, and write your tests against those small methods.
The extra code seems to have become a maintenance headache: when our internal Framework changes, we have to go around and fix any unit tests that hang off it.
The alternative is that when your Framework changes, you don't test the changes. Or you don't test the Framework at all. Is that what you want?
You may try refactoring your Framework so that it is composed from smaller pieces that can be tested independently. Then when your Framework changes, you hope that either (a) fewer pieces change or (b) the changes are mostly in the ways in which the pieces are composed. Either way will get you better reuse of both code and tests. But real intellectual effort is involved; don't expect it to be easy.
I found that unless you use an IoC/DI methodology that encourages writing very small classes, and follow the Single Responsibility Principle religiously, the unit tests end up testing the interaction of multiple classes, which makes them very complex and therefore fragile.
My point is, many of the novel software development techniques only work when used together. Particularly MVC, ORM, IoC, unit-testing and Mocking. The DDD (in the modern primitive sense) and TDD/BDD are more independent so you may use them or not.
Sometimes designing the TDD tests raises questions about the design of the application itself. Check whether your classes have been well designed and your methods are each doing only one thing at a time... With a good design it should be simple to write code to test simple methods and classes.
I have been thinking about this topic myself. I'm very sold on the value of unit tests, but not on strict TDD. It seems to me that, up to a certain point, you may be doing exploratory programming where the way you have things divided up into classes/interfaces is going to need to change. If you've invested a lot of time in unit tests for the old class structure, that's increased inertia against refactoring, painful to discard that additional code, etc.

Mocks or real classes? [duplicate]

This question already has answers here:
When should I mock?
Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit test. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too, I think there is a balance with some mocks and some real objects you will have to find for yourself.
There is one really good reason why you want to use stubs/mocks instead of real classes: to keep the class under test in your unit test (a pure unit test) isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database then the tests become sluggish. Slow tests make developers not run unit tests as often. If you're doing Test Driven Development, then time-hogging tests are a devastating waste of developer time.
It will be easier to track down problems if the test is isolated to the class under test. In a system test, by contrast, it is much more difficult to track down nasty bugs that are not readily visible in stack traces or whatnot.
Tests are less fragile with respect to changes in external classes/interfaces because you're purely testing the class that is under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against the external behaviour of a class rather than the internal implementation, which is more useful when deciding on code design.
Now if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for the purpose of validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to your favorite code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
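A compact illustration of that split (a sketch with invented names, assuming Moq for the mock): the first test isolates the class under test with a stubbed collaborator, the second wires in the real one.
using Moq;
using NUnit.Framework;

public interface IRateSource { decimal CurrentRate(); }

public class RealRateSource : IRateSource
{
    // Imagine this talked to a web service in the real system.
    public decimal CurrentRate() => 1.25m;
}

public class Converter
{
    private readonly IRateSource _source;
    public Converter(IRateSource source) { _source = source; }
    public decimal Convert(decimal amount) => amount * _source.CurrentRate();
}

[TestFixture]
public class ConverterTests
{
    [Test] // unit test: the collaborator is stubbed, so the Converter is isolated
    public void Convert_MultipliesByTheRate()
    {
        var source = new Mock<IRateSource>();
        source.Setup(s => s.CurrentRate()).Returns(2m);
        Assert.AreEqual(20m, new Converter(source.Object).Convert(10m));
    }

    [Test] // integration-style test: the real collaborator is used end to end
    public void Convert_WorksAgainstTheRealRateSource()
    {
        Assert.AreEqual(12.5m, new Converter(new RealRateSource()).Convert(10m));
    }
}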
Extracted and extended from an answer of mine to "How do I unit-test inheriting objects?":
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets or serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your 'real things' are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry as mocks generated from mocking frameworks can be given precise expectations about how they will be used e.g. the number of methods called, the precise sequence and the parameters expected each time. Your real objects cannot do this for you so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
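For example (a sketch; the interface, names and data are invented), the fake data access class can be little more than a dictionary behind the same interface the real database-backed class implements:
using System.Collections.Generic;

public interface IUserStore
{
    string GetEmail(int userId);
}

// The real implementation would query the database; the fake keeps cooked data
// in memory, so tests keep running even when the database server is down.
public class FakeUserStore : IUserStore
{
    private readonly Dictionary<int, string> _emails = new Dictionary<int, string>
    {
        { 1, "alice@example.com" },
        { 2, "bob@example.com" }
    };

    public string GetEmail(int userId) => _emails[userId];
}

// A class under test just receives an IUserStore and never knows the difference, e.g.:
//   var notifier = new Notifier(new FakeUserStore());   // Notifier is hypothetical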
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.