How are Mocks meant to be used? - unit-testing

When I was originally introduced to Mocks, I felt their primary purpose was to mock up objects that come from external sources of data. That way I did not have to maintain a test database for automated unit testing; I could just fake it.
But now I am starting to think of it differently. I am wondering if Mocks are more effectively used to completely isolate the tested method from anything outside of itself. The image that keeps coming to mind is the backdrop you use when painting: you want to keep the paint from getting all over everything. I am only testing that method, and I only want to know how it reacts to these faked-up external factors.
It seems incredibly tedious to do it this way, but the advantage I am seeing is that when a test fails, it is because that method is broken, not something 16 layers down. The downside is that I now need 16 tests to get the same coverage, because each piece is tested in isolation. Plus, each test becomes more complicated and more deeply tied to the method it is testing.
It feels right to me but it also seems brutal so I kind of want to know what others think.

I recommend you take a look at Martin Fowler's article Mocks Aren't Stubs for a more authoritative treatment of Mocks than I can give you.
The purpose of mocks is to unit test your code in isolation of dependencies so you can truly test a piece of code at the "unit" level. The code under test is the real deal, and every other piece of code it relies on (via parameters or dependency injection, etc) is a "Mock" (an empty implementation that always returns expected values when one of its methods is called.)
Mocks may seem tedious at first, but they make Unit Testing far easier and more robust once you get the hang of using them. Most languages have Mock libraries which make mocking relatively trivial. If you are using Java, I'll recommend my personal favorite: EasyMock.
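To make that concrete, here is a minimal sketch of what a test with an EasyMock mock can look like. The PriceService and Checkout types below are made up purely for illustration; the point is only the record/replay/verify rhythm.

```java
import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CheckoutTest {

    // Hypothetical dependency the code under test relies on.
    interface PriceService {
        double priceOf(String sku);
    }

    // Hypothetical code under test: this is the real deal, not mocked.
    static class Checkout {
        private final PriceService prices;
        Checkout(PriceService prices) { this.prices = prices; }
        double total(String... skus) {
            double sum = 0;
            for (String sku : skus) sum += prices.priceOf(sku);
            return sum;
        }
    }

    @Test
    public void totalsThePricesReturnedByTheService() {
        PriceService prices = createMock(PriceService.class);
        expect(prices.priceOf("apple")).andReturn(1.50);
        expect(prices.priceOf("bread")).andReturn(2.25);
        replay(prices);                       // switch the mock into "playback" mode

        assertEquals(3.75, new Checkout(prices).total("apple", "bread"), 0.001);

        verify(prices);                       // fails if an expected call never happened
    }
}
```

The dependency never touches a database or a network; the mock simply hands back the canned values recorded above.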
Let me finish with this thought: you need integration tests too, but having a good volume of unit tests helps you find out which component contains a bug, when one exists.

Don't go down the dark path Master Luke. :) Don't mock everything. You could but you shouldn't... here's why.
If you continue to test each method in isolation, you have surprises and work cut out for you when you bring them all together a la the BIG BANG. We build objects so that they can work together to solve a bigger problem; by themselves they are insignificant. You need to know that all the collaborators are working as expected.
Mocks make tests brittle by introducing duplication - yes, I know that sounds alarming. For every mock expectation you set up, there are n places where your method signature now exists: the actual code and your mock expectations (in multiple tests). Changing the actual code is the easy part... updating all the mock expectations is tedious.
Your test is now privy to insider implementation information, so your test depends on how you chose to implement the solution... bad. Tests should be an independent spec that can be met by multiple solutions. I should have the freedom to just press delete on a block of code and reimplement it without having to rewrite the test suite... because the requirements still stay the same.
To close, I'll say "If it quacks like a duck, walks like a duck, then it probably is a duck" - If it feels wrong.. it probably is. *Use mocks to abstract out problem children like IO operations, databases, third party components and the like.. Like salt, some of it is necessary.. too much and :x *
This is the holy war of state-based vs. interaction-based testing... Googling will give you deeper insight.
Clarification: I'm hitting some resistance w.r.t. integration tests here :) So to clarify my stand..
Mocks do not figure in the 'Acceptance tests'/Integration realm. You'll only find them in the Unit Testing world.. and that is my focus here.
Acceptance tests are different and are very much needed - not belittling them. But Unit tests and Acceptance tests are different and should be kept different.
Collaborators within a component or package do not all need to be isolated from each other; like micro-optimization, that is overkill. They exist to solve a problem together... cohesion.

Yes, I agree. I see mocking as sometimes painful, but often necessary, for your tests to truly become unit tests, i.e. only the smallest unit that you can make your test concerned with is under test. This allows you to eliminate any other factors that could potentially affect the outcome of the test. You do end up with a lot more small tests, but it becomes so much easier to work out where a problem is with your code.

My philosophy is that you should write testable code to fit the tests, not write tests to fit the code.
As for complexity, my opinion is that tests should be simple to write, simply because you write more tests if they are.
I might agree that this could be a good idea if the classes you're mocking don't have a test suite, because if they did have a proper test suite, you would know where the problem is without isolation.
Most of the time I've had use for mock objects, the code I'm writing tests for is so tightly coupled (read: bad design) that I have to write mock objects because the classes it depends on are not available. Sure, there are valid uses for mock objects, but if your code requires their usage, I would take another look at the design.

Yes, that is the downside of testing with mocks. There is a lot of work you need to put in, and it can feel brutal. But that is the essence of unit testing. How can you test something in isolation if you don't mock external resources?
On the other hand, you're mocking away slow functionality (such as databases and I/O operations). If the tests run faster, that will keep programmers happy. There is nothing much more painful than waiting for really slow tests, which take more than 10 seconds to finish running, while you're trying to implement one feature.
If every developer in your project spent time writing unit tests, then those 16 layers (of indirection) wouldn't be that much of a problem. Hopefully you should have that test coverage from the beginning, right? :)
Also, don't forget to write functional/integration tests between objects in collaboration. Otherwise you might miss something. These tests won't need to be run often, but they are still important.

On one scale, yes, mocks are meant to be used to simulate external data sources such as a database or a web service. On a more finely grained scale however if you're designing loosely coupled code then you can draw lines throughout your code almost arbitrarily as to what might at any point be an 'outside system'. Take a project I'm working on currently:
When someone attempts to check in, the CheckInUi sends a CheckInInfo object to a CheckInMediator object which validates it using a CheckInValidator, then if it is ok, it fills a domain object named Transaction with CheckInInfo using CheckInInfoAdapter then passes the Transaction to an instance of ITransactionDao.SaveTransaction() for persistence.
I am right now writing some automated integration tests, and obviously the CheckInUi and ITransactionDao are windows onto external systems, so they're the ones which should be mocked. However, who's to say that at some point CheckInValidator won't be making a call to a web service? That is why when you write unit tests you assume that everything other than the specific functionality of your class is an external system. Therefore, in my unit test of CheckInMediator I mock out all the objects that it talks to.
EDIT: Gishu is technically correct, not everything needs to be mocked, I don't for example mock CheckInInfo since it is simply a container for data. However anything that you could ever see as an external service (and it is almost anything that transforms data or has side-effects) should be mocked.
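To make that concrete, here is roughly what the isolated CheckInMediator test looks like. The interfaces and method signatures below are hypothetical stand-ins for the real project (sketched in Java with EasyMock), not the actual code:

```java
import static org.easymock.EasyMock.*;
import org.junit.Test;

public class CheckInMediatorTest {

    // Hypothetical collaborator interfaces, standing in for the real ones.
    interface CheckInValidator { boolean isValid(CheckInInfo info); }
    interface ITransactionDao  { void saveTransaction(Transaction t); }
    static class CheckInInfo { }
    static class Transaction { }

    // Hypothetical mediator under test: validates, then persists.
    static class CheckInMediator {
        private final CheckInValidator validator;
        private final ITransactionDao dao;
        CheckInMediator(CheckInValidator validator, ITransactionDao dao) {
            this.validator = validator;
            this.dao = dao;
        }
        void checkIn(CheckInInfo info) {
            if (validator.isValid(info)) {
                dao.saveTransaction(new Transaction());
            }
        }
    }

    @Test
    public void savesATransactionWhenTheCheckInIsValid() {
        CheckInValidator validator = createMock(CheckInValidator.class);
        ITransactionDao dao = createMock(ITransactionDao.class);
        CheckInInfo info = new CheckInInfo();            // plain data holder, no need to mock it

        expect(validator.isValid(info)).andReturn(true);
        dao.saveTransaction(anyObject(Transaction.class)); // void method: record the expected call
        replay(validator, dao);

        new CheckInMediator(validator, dao).checkIn(info);

        verify(validator, dao);   // the mediator is exercised in isolation from its collaborators
    }
}
```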
An analogy that I like is to think of a properly loosely coupled design as a field with people standing around it playing a game of catch. When someone is passed the ball, he might throw a completely different ball to the next person; he might even throw multiple balls in succession to different people, or throw a ball and wait to receive it back before throwing it to yet another person. It is a strange game.
Now as their coach and manager, you of course want to check how your team works as a whole so you have team practice (integration tests) but you also have each player practice on his own against backstops and ball-pitching machines (unit tests with mocks). The only piece that this picture is missing is mock expectations and so we have our balls smeared with black tar so they stain the backstop when they hit it. Each backstop has a 'target area' that the person is aiming for and if at the end of a practice run there is no black mark within the target area you know that something is wrong and the person needs his technique tuned.
Really take the time to learn it properly, the day I understood Mocks was a huge a-ha moment. Combine it with an inversion of control container and I'm never going back.
On a side note, one of our IT people just came in and gave me a free laptop!

As someone said before, if you mock everything to isolate more granularly than the class you are testing, you give up enforcing cohesion in the code that is under test.
Keep in mind that mocking has one fundamental advantage over stubbing: behavior verification. That is something stubs don't provide, and it is also the other reason mocks make tests more brittle (though they can improve code coverage).
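A small sketch of that difference, with invented names: the AccountStore below is used purely as a stub (it only feeds data in), while the AuditLog is a true mock whose interaction is verified.

```java
import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class StubVersusMockTest {

    // Hypothetical collaborators.
    interface AccountStore { int balanceOf(String id); }
    interface AuditLog     { void record(String message); }

    // Hypothetical code under test.
    static class Withdrawal {
        private final AccountStore store;
        private final AuditLog audit;
        Withdrawal(AccountStore store, AuditLog audit) { this.store = store; this.audit = audit; }
        boolean withdraw(String id, int amount) {
            boolean ok = store.balanceOf(id) >= amount;
            if (ok) audit.record("withdrew " + amount + " from " + id);
            return ok;
        }
    }

    @Test
    public void recordsAnAuditEntryForASuccessfulWithdrawal() {
        // State-based: the store just returns canned data, stub-style.
        AccountStore store = createMock(AccountStore.class);
        expect(store.balanceOf("acc-1")).andReturn(100);

        // Behaviour-based: the audit log is a mock whose interaction we verify.
        AuditLog audit = createMock(AuditLog.class);
        audit.record("withdrew 40 from acc-1");
        replay(store, audit);

        assertTrue(new Withdrawal(store, audit).withdraw("acc-1", 40));

        verify(store, audit);   // fails if the audit entry was never recorded
    }
}
```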

Mocks were invented in part to answer the question: How would you unit test objects if they had no getters or setters?
These days, recommended practice is to mock roles not objects. Use Mocks as a design tool to talk about collaboration and separation of responsibilities, and not as "smart stubs".

Mock objects are 1) often used as a means to isolate the code under test, BUT 2) as keithb already pointed out, are important to "focus on the relationships between collaborating objects". This article gives some insights and history related to the subject: Responsibility Driven Design with Mock Objects.

Related

Black Box Unit Testing

In my last project, we had Unit Testing with almost 100% code coverage, and as a result we almost didn't have any bugs.
However, since Unit Testing must be White Box (you have to mock inner functions to get the result you want, so your tests need to know about the inner structure of your code) any time we changed the implementation of a function, we had to change the tests as well.
Note that we didn't change the logic of the functions, just the implementation.
It was very time-consuming and it felt as if we were working the wrong way.
Since we used all proper OOP guidelines (specifically Encapsulation), every time we changed the implementation we didn't have to change the rest of our code, but had to change the unit tests.
It felt as if we were serving the tests, instead of them serving us.
To prevent this, some of us argued that unit tests should be Black Box Testing.
That would be possible if we created one big mock of our entire Domain, with a stub for every function in every class in one place, and used it in every unit test.
Of course, if a specific test needs a specific inner function to be called (like making sure we write to the DB), we can override our stub.
So, every time we change the implementation of a function (like adding or replacing a call to a helper function), we will only need to change our one big mock. Even if we do need to change some unit tests, it will still be far fewer than before.
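A rough sketch of what we have in mind, with hypothetical names: one shared stub class with harmless defaults, maintained in a single place, which an individual test overrides only where it cares.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BigDomainStubSketchTest {

    // Hypothetical domain interface.
    interface OrderRepository {
        void save(String orderId);
        boolean exists(String orderId);
    }

    // The one shared "big mock": harmless defaults for every method,
    // maintained in a single place for the whole test suite.
    static class DefaultOrderRepository implements OrderRepository {
        @Override public void save(String orderId) { /* do nothing by default */ }
        @Override public boolean exists(String orderId) { return false; }
    }

    // Hypothetical code under test.
    static class OrderService {
        private final OrderRepository repo;
        OrderService(OrderRepository repo) { this.repo = repo; }
        boolean placeOrder(String orderId) {
            if (repo.exists(orderId)) return false;
            repo.save(orderId);
            return true;
        }
    }

    @Test
    public void aTestThatCaresAboutTheSaveOverridesJustThatMethod() {
        final boolean[] saved = {false};
        // Override only the behaviour this test cares about; everything else keeps the defaults.
        OrderRepository repo = new DefaultOrderRepository() {
            @Override public void save(String orderId) { saved[0] = true; }
        };

        assertTrue(new OrderService(repo).placeOrder("order-42"));
        assertTrue("expected the order to be written", saved[0]);
    }
}
```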
Others argue that unit tests must be White Box, since not only do you want to make sure your app writes to the DB in a specific place, you also want to make sure your app does not write to the DB anywhere else unless you specifically expect it to. While this is a valid point, I don't think it is worth the time of writing White Box tests instead of Black Box tests.
So in conclusion, two questions:
What do you think about the concept of Black Box Unit Testing?
What do you think about the way we want to implement that concept? Do you have better ideas?
You need different types of tests.
Unit-tests which should be white-box testing, as you did
Integration tests (or system tests) which test the ability to use the actual implementations of your system and its communication with external layers (external systems, database, etc.) which should be black-box styled, but each one for a specific feature (CRUD tests for example)
Acceptance tests which should be completely black-box and are driven by functional requirements (as your users would phrase them). End-to-end as much as possible, and not knowing the internal of your chosen implementations. The textbook definition of black-box tests.
And remember, code coverage is meaningless in most cases. You need high line coverage (or method coverage, whatever your counting method is), but that's usually not sufficient. The concept you need to think about is functional coverage: making sure all your requirements and logical paths are covered.
and as a result we almost didn’t have any bugs
If you were really able to achieve this, then I don't think you should change anything.
Black box testing might sound appealing on paper, but the truth is you almost always need to know part of the inner workings of the tested class. The "provide input, verify output" approach really works only for simple cases. Most of the time your tests need at least some knowledge of the tested method - how it interacts with external collaborators, what methods it calls, in what order, and so forth.
The whole idea behind mocking and SOLID design is to avoid the situation where a change to a dependency's implementation causes test changes or failures for other classes. Conversely, if you change the implementation details of the tested method, the implementation details of its tests should change too. That's nothing too uncommon.
Overall, if you were really able to achieve almost no bugs, then I would stick to that approach.
tl;dr version:
Black Box unit testing is exactly how unit testing should be done. Proper TDD practice does exactly this.
Full version:
There is absolutely no need to test the private methods of your objects. It will have no impact on code coverage, either.
When you TDD a class, you write tests that check the behavior of that class. Behavior is expressed through the public methods of that class. You should never bother with how those methods are really implemented. The Google people described that a lot better than I will ever be able to: http://googletesting.blogspot.ru/2013/08/testing-on-toilet-test-behavior-not.html
If you make the usual mistake and statically depend on other entity classes, or worse, on classes from a different layer of the application, it is inevitable that you will find yourself in a situation where you need to check a lot of things in your test and prepare a lot of stuff for it. The Dependency Injection principle and the Law of Demeter exist to solve this.
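As a small illustration (the ShoppingCart below is invented), a behavior-focused test exercises only the public API, so it survives any rewrite of the internals that keeps the observable behavior the same:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {

    // Hypothetical class under test. The test below never peeks inside it.
    static class ShoppingCart {
        private final java.util.List<Integer> pricesInCents = new java.util.ArrayList<>();
        void add(int priceInCents) { pricesInCents.add(priceInCents); }
        int total() {
            int sum = 0;
            for (int p : pricesInCents) sum += p;
            return sum;
        }
    }

    @Test
    public void totalIsTheSumOfEverythingAdded() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(250);
        cart.add(100);

        // Behaviour only: what goes in, what comes out. Whether the cart uses a
        // List, an array, or a running sum internally is none of the test's business.
        assertEquals(350, cart.total());
    }
}
```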
I think you should continue writing unit tests - just make them less fragile.
Unit tests should be low level, but they should test the result and not how things are done. When an implementation change causes a lot of test changes, it means that instead of testing requirements you're actually testing the implementation.
There are several rules of thumb here, such as "don't test private methods" and "use mock objects".
Mocking/simulating the entire domain usually results in the opposite of what you're trying to accomplish - when the code's behavior changes, you need to update the tests to make sure your "simulated objects" behave the same - and that becomes really hard really fast as the complexity of the project increases.
I suggest that you continue writing unit tests - just learn how to make them more robust and less fragile.
"as a result we almost didn’t have any bugs" -- so keep it that way.
The sole cause of frustration is the necessity to maintain unit tests, which is actually not such a bad thing (the alternative is much worse). Just make them more maintainable. "The Art of Unit Testing" by Roy Osherove gave me a good start in this direction.
So:
1) Not an option. (The idea itself contradicts the principles of TDD, for instance.)
2) You'll have far more maintenance trouble with such an approach. The unit-testing philosophy is to carve the SUT out of the rest of the system and test it using stubs as inputs and mocks as outputs, simulating real-life situations (or maybe I just don't get the "one big mock of our entire Domain" idea).
For detailed information about black, white and grey box and decision tables refer to the following article, which explains everything.
Testing Web-based applications: The state of the art and future trends (PDF)

Goal of unit testing and TDD: find/minimize bugs or improve design?

I'm fairly green to unit testing and TDD, so please bear with me as I ask what some may consider newbie questions, or if this has been debated before. If this turns out to be considered a "bad question" (too subjective and open for debate), I will happily close it. However, I've searched for a couple of days and am not getting a definitive answer, and I need a better understanding of this, so I know no better way to get more info than to post here.
I've started reading an older book on unit testing (because a colleague had it on hand), and its opening chapter talks about why to unit test. One of the points it makes is that in the long run, your code is much more reliable and cleaner, and less prone to bugs. It also points out that effective unit testing will make tracking and fixing bugs much easier. So it seems to focus quite a bit on the overall prevention/reduction of bugs in your code.
On the other hand, I also found an article about writing great unit tests, and it states that the goal of unit testing is to make your design more robust, and conversely, finding bugs is the goal of manual testing, not unit testing.
So being the newbie to TDD that I am, I'm a little confused as to the state of mind with which I should go into TDD and building my unit tests. I'll admit that part of the reason I'm taking this on now with my recently started project is because I'm tired of my changes breaking previously existing code. And admittedly, the linked article above does at least point this out as an advantage of TDD. But my hope is that by going back in and adding unit tests to my existing code (and then continuing TDD from this point forward), I can prevent these bugs in the first place.
Are this book and this article really saying the same thing in different tones, or is there some subjectivity on this subject, and what I'm seeing is just two people having somewhat different views on how to approach TDD?
Thanks in advance.
Unit tests and automated tests generally are for both better design and verified code.
A unit test should test some execution path in some very small unit. This unit is usually a public method or an internal method exposed on your object. The method itself can still use many other protected or private methods from the same object instance. You can have a single method and several unit tests for it, each covering a different execution path. (By execution path I mean something controlled by if, switch, etc.) Writing unit tests this way validates that your code really does what you expect. This can be especially important in corner cases where you expect an exception to be thrown in some rare scenario. You can also test how the method behaves if you pass different parameters - for example null instead of an object instance, a negative value for an integer used for indexing, etc. That is especially useful for a public API.
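For instance, a single method might get one small test per execution path. The SafeIndexer class below is made up just to show the idea (happy path, out-of-range input, and an exceptional case):

```java
import static org.junit.Assert.*;
import org.junit.Test;

public class SafeIndexerTest {

    // Hypothetical unit under test with a few distinct execution paths.
    static class SafeIndexer {
        String elementAt(String[] items, int index) {
            if (items == null) throw new IllegalArgumentException("items must not be null");
            if (index < 0 || index >= items.length) return null;   // out of range: no element
            return items[index];
        }
    }

    private final SafeIndexer indexer = new SafeIndexer();

    @Test
    public void returnsTheElementForAnIndexInRange() {
        assertEquals("b", indexer.elementAt(new String[] {"a", "b"}, 1));
    }

    @Test
    public void returnsNullForANegativeIndex() {
        assertNull(indexer.elementAt(new String[] {"a", "b"}, -1));
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsANullArray() {
        indexer.elementAt(null, 0);
    }
}
```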
Now suppose that your tested method also uses instances of other classes. How to deal with it? Should you still test your single method and believe that class works? What if the class is not implemented yet? What if the class has some complex logic inside? Should you test these execution paths as well on your current method? There are two approaches to deal with this:
In some cases you will simply let the real class instance be tested together with your method. This is, for example, very common in the case of logging (it is not bad to have logs available for the test as well).
In other scenarios you would like to remove these dependencies from your method, but how? The solution is dependency injection and implementing against an abstraction instead of an implementation. What does that mean? It means that your method / class will not create instances of these dependencies; instead it will receive them either through method parameters, the class constructor, or class properties. It also means that it will not expect a concrete implementation but an abstract base class or an interface. This allows you to pass a fake, dummy, or mock implementation to your tested object. These special kinds of implementations don't do any real processing: they take some data and return an expected result. This lets you test your method without its dependencies and leads to a much better and more extensible design.
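A minimal sketch of that idea, with invented names: the class under test receives an abstraction through its constructor, so the test can hand it a fake instead of the real implementation.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreeterTest {

    // Abstraction the code under test depends on (instead of a concrete system clock).
    interface Clock { int hourOfDay(); }

    // Hypothetical class under test: gets its dependency injected via the constructor.
    static class Greeter {
        private final Clock clock;
        Greeter(Clock clock) { this.clock = clock; }
        String greeting() {
            return clock.hourOfDay() < 12 ? "Good morning" : "Good afternoon";
        }
    }

    @Test
    public void greetsAccordingToTheInjectedTime() {
        Clock fakeMorning = () -> 9;            // fake implementation, no real clock involved
        assertEquals("Good morning", new Greeter(fakeMorning).greeting());

        Clock fakeAfternoon = () -> 15;
        assertEquals("Good afternoon", new Greeter(fakeAfternoon).greeting());
    }
}
```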
What is the disadvantage? Once you start using fakes / mocks, you are testing a single method / class, but you don't have a test that takes all the real implementations and puts them together to check whether the whole system really works. You can have thousands of unit tests validating that each of your methods works, but that doesn't mean they will work together. This is the scenario for more complex tests - integration or end-to-end tests.
Unit tests should usually be very easy to write - if they are not, it means that your design is probably too complicated and you should think about refactoring. They should also be very fast to execute so that you can run them very often. Other kinds of tests can be more complex and very slow, and they should run mostly on a build server.
How does this fit into the software development process? The worst part of the development process is stabilization and bug fixing, because that part is very hard to estimate. To estimate how much time a bug fix takes, you must know what causes the bug, but that investigation itself cannot be estimated. You can have a bug that takes one hour to fix but two weeks of debugging your application to find. With good code coverage you will most probably find such a bug early during development.
Automated tests don't prove that the software contains no bugs. They only say that you did your best to find and fix them during development, and because of that your stabilization phase can be much less painful and much shorter. They also don't prove that your software does what it should - that is about the application logic itself, which must be checked by separate tests going through each use case / user story - acceptance tests (which can also be automated).
How does this fit with TDD? TDD takes this to the extreme, because in TDD you write your tests first, to drive your quality, code coverage, and design.
It's a false choice. "Find/minimize bugs" OR improve design.
TDD, in particular (and as opposed to "just" unit testing) is all about giving you better design.
And when your design is better, what are the consequences?
Your code is easier to read
Your code is easier to understand
Your code is easier to test
Your code is easier to reuse
Your code is easier to debug
Your code has fewer bugs in the first place
With well-designed code, you spend less time finding and fixing bugs, and more time adding features and polish. So TDD gives you a savings on bugs and bug-hunting, by giving you better design. These things are not separate; they are dependent and interrelated.
There can be many different reasons why you might want to test your code. Personally, I test for a number of reasons:
I usually design an API using a combination of the normal design patterns (top-down) and test-driven development (TDD; bottom-up) to ensure that I have a sound API both from a best-practices point of view and from an actual-usage point of view. The focus of the tests is both on the major use cases for the API and on the completeness of the API and its behavior - so they are primarily "black box" tests. The development sequence is often:
main API based on design patterns and "gut feeling"
TDD tests for the major use-cases according to the high-level specification for the API - primarily in order to make sure the API is "natural" and easy to use
fleshed out API and behavior
all the needed test cases to ensure the completeness and correct behavior
Whenever I fix an error in my code, I try to write a test to make sure it stays fixed. Somehow, the error got into my original design and slipped past my original testing of the code, so it is probably not all that trivial. I have noticed that many of these tests are "white box" tests.
In order to be able to make any sort of major re-factoring of the code, you need an extensive set of API tests to make sure the behavior of the code stays the same after the re-factoring. For any non-trivial API, I want the test suite to be in place and working for a long time before the re-factoring, to be sure that all the major use-cases are covered in a good way. As often as not, you are forced to throw away most of your "white box" tests, as they - by their very definition - make too many assumptions about the internals. I usually try to "translate" as many of these tests as possible, since the same non-trivial problems tend to survive re-factoring of the code.
In order to transfer any code between developers, I usually also want a good test suite with focus on the API and the major use-cases. So basically the tests from the initial TDD...
I think that answer to your question is: both.
You will improve design because there is one particular thing about TDD that is great: while you write tests you put yourself in the position of the client code that will be using the system under test - and this alone makes you think about certain design choices.
For example: UI. When you start writing the tests, you will see that those God-Forms are impossible to test, so you separate the logic behind the screens to a presenter/controller, and you get MVP/MVC/whatever.
Having the concept of unit testing a class and mocking dependencies brings you to the Single Responsibility Principle. A similar point can be made about each of the SOLID principles.
As for bugs: if you unit test every method of every class you write (except properties, very simple methods and such), you will catch most bugs at the start. Add integration tests and you cover almost all of them.
I'll take my stab at this using a remix of a previous answer I wrote. In short, I don't see this as a dichotomy between driving good design and minimizing bugs. I see it more as one (good design) leading to the other (minimizing bugs).
I tend towards saying TDD is a design process that happens to involve unit testing. It's a design process because within each Red-Green-Refactor iteration, you write the test first for code that doesn't exist. You're designing as you're going.
The first beauty of TDD is that the design of your code is guaranteed to be testable. Testable code tends to have loose coupling and high cohesion. Loose coupling and high cohesion are important because they make the code easy to change when requirements change. The second beauty of TDD is that after you're done implementing your system, you happen to have a huge regression suite to catch any bugs and changes in assumptions. Thus, TDD makes your code easy to change because of the design it creates and it makes your code safe to change because of the test harness it creates.
Trying to retrospectively add unit tests can be quite painful and expensive. If the code doesn't support unit testing, you may be better off looking at integration tests to test your code.
Don't mix Unit Testing with TDD.
Unit Testing is just the fact of "testing" your code to ensure quality and maintainability.
TDD is a full blown development methodology in which you first write your tests (based on requirements), and only then you write the needed code (and just the needed code) to make that test pass. This means that you only write code to repair a broken test.
Once that is done, you write another test, and the code needed to make it pass. Along the way, you may be forced to refactor the code to allow a new test to run without breaking another. This way, the "design" arises from the tests.
The purpose of this methodology is of course to reduce bugs and improve design, but its main goal is to improve productivity, because you write exactly the code you need. And you don't write documentation: the tests are the documentation. If a requirement changes, you change the tests and then the code. If new requirements appear, just add new tests.

BDD and functional tests

I am starting to buy into BDD. Basically, as I understand it, you write a scenario that describes the acceptance criteria for a certain story. You start with simple tests, from the outside in, using mocks in place of classes you have not implemented yet. As you progress, you replace the mocks with real classes. From Introduction to BDD:
At first, the fragments are implemented using mocks to set an account to be in credit or a card to be valid. These form the starting points for implementing behaviour. As you implement the application, the givens and outcomes are changed to use the actual classes you have implemented, so that by the time the scenario is completed, they have become proper end-to-end functional tests.
My question is: When you finish implementing a scenario, should all classes you use be real, like in integration tests? For example, if you use DB, should your code write to a real (but lightweight in-memory) DB? In the end, should you have any mocks in your end-to-end tests?
Well, it depends :-) As I understand, the tests produced by BDD are still unit tests, so you should use mocks to eliminate dependency on external factors like DB.
In full fledged integration / functionality tests, however, you should obviously test against the whole production system, without any mocks.
Integration tests might contain stubs/mocks to fake the code/components outside the modules that you are integrating.
However, IMHO an end-to-end test should mean no stubs/mocks along the way, production code only. Or in other words - if mocks are present, it is not really an end-to-end test.
Yes, by the time a scenario runs, ideally all your classes will be real. A scenario exercises the behaviour from a user's point of view, so the system should be as a user would see it.
In the early days of BDD we used to start with mocks in the scenarios. I don't bother with this any more, because it's a lot of work to keep mocking as you go down the levels. Instead I will sometimes do things like hard-code data or behaviour if it lets me get feedback from the stakeholders more quickly.
We still keep mocks in the unit tests though.
For things like databases, sure, you can use an in-memory DB or whatever helps you get feedback faster. At some point you should probably run your scenarios on a system that's as close to production as possible. If this is too slow, you might do it overnight instead of as part of your regular build cycle.
As for what you "should" do, writing the right code is far more tricky than writing the code right. I worry about using my scenarios to get feedback from the stakeholders and users before I worry about how close my environment is to a production environment. When you get to the point where changes are deployed every couple of weeks, sure, then you probably want more certainty that you're not introducing any bugs.
Good luck!
I agree with Peter and ratkok. I think you keep the mocks forever, so you always have unit tests.
Separately, it is appropriate to additionally have integration tests (no mocks, use a DB, etc. etc.).
You may even find in-betweens helpful at times (mock one piece of depended-on code (DOC), but not another).
I've only recently been looking into BDD and in particular jBehave. I work in fairly large enterprises with a lot of waterfall, ceremony-oriented people. I'm looking at BDD as a way to take the business's use cases and turn them directly into tests, which the developer can then turn into either unit tests or integration tests.
BDD seems to me to be not just a way to help drive the developer's understanding of what the business wants, but also a way to ensure, as much as possible, that those requirements are accurately represented.
My view is that if you are dealing with mocks then you are doing unit tests. You need both: unit tests to test out the details of a class's operation, and integration tests to check that the class plays well with others. I find developers often get confused between the two, but it's best to be as clear as possible and keep them separate from each other.

Do setup/teardown hurt test maintainability?

This seemed to spark a bit of conversation on another question and I thought it worthy to spin into its own question.
The DRY principle seems to be our weapon of choice for fighting maintenance problems, but what about the maintenance of test code? Do the same rules of thumb apply?
A few strong voices in the developer testing community are of the opinion that setup and teardown are harmful and should be avoided... to name a few:
James Newkirk
Jay Fields, [2]
In fact, xUnit.net has removed them from the framework altogether for this very reason (though there are ways to get around this self-imposed limitation).
What has been your experience? Do setup/teardown hurt or help test maintainability?
UPDATE: do more fine-grained constructs like those available in JUnit 4 or TestNG (@BeforeClass, @BeforeGroups, etc.) make a difference?
The majority (if not all) of the valid uses for setup and teardown methods can be written as factory methods, which allow for DRY without getting into the issues that seem to plague the setup/teardown paradigm.
If you're implementing a teardown, it typically means you're not doing a unit test, but rather an integration test. A lot of people use this as a reason not to have a teardown, but IMO you should have both integration and unit tests. I would personally separate them into separate assemblies, but I think a good testing framework should be able to support both types of testing. Not all good testing is going to be unit testing.
However, with the setup there seems to be a number of reasons why you need to do things before a test is actually run. For example, construction of object state to prep for the test (for instance setting up a Dependency Injection framework). This is a valid reason for a setup, but could just as easily be done with a factory.
Also, there is a distinction between class and method level setup/teardown. That needs to be kept in mind when considering what you're trying to do.
The biggest problem I have had with the setup/teardown paradigm is that my tests don't always follow the same pattern. This has brought me to using factory patterns instead, which lets me have DRY while at the same time being readable and not at all confusing to other developers. Going the factory route, I've been able to have my cake and eat it too.
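A small sketch of the factory-method style, with invented names: instead of a shared setup method, each test asks a creation method for exactly the fixture it needs.

```java
import static org.junit.Assert.*;
import org.junit.Test;

public class InvoiceTest {

    // Hypothetical class under test.
    static class Invoice {
        private int totalCents;
        private boolean paid;
        void addLine(int cents) { totalCents += cents; }
        void markPaid()         { paid = true; }
        int total()             { return totalCents; }
        boolean isPaid()        { return paid; }
    }

    // Factory methods replace setUp: no hidden shared state, and each test
    // reads as "given this object, expect this behaviour".
    private Invoice emptyInvoice()       { return new Invoice(); }
    private Invoice invoiceWithOneLine() { Invoice i = emptyInvoice(); i.addLine(500); return i; }

    @Test
    public void aNewInvoiceHasNoTotal() {
        assertEquals(0, emptyInvoice().total());
    }

    @Test
    public void addingALineIncreasesTheTotal() {
        assertEquals(500, invoiceWithOneLine().total());
    }

    @Test
    public void anInvoiceCanBeMarkedPaid() {
        Invoice invoice = invoiceWithOneLine();
        invoice.markPaid();
        assertTrue(invoice.isPaid());
    }
}
```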
They've really helped with our test maintainability. Our "unit" tests are actually full end-to-end integration tests that write to the DB and check the results. Not my fault, they were like that when I got here, and I'm working to change things.
Anyway, if one test failed, it went on to the next one, trying to enter the same user from the first test in the DB, violating a uniqueness constraint, and the failures just cascaded from there. Moving the user creation/deletion into the [Fixture][SetUp|TearDown] methods allowed us to see the one test that failed without everything going haywire, and made my life a lot easier and less stabby.
I think the DRY principle applies just as much to tests as it does to code, but its application is different. In code you go to much greater lengths to literally not do the same thing in two different parts of the code. In tests the need to do that (a lot of the same setup) is certainly a smell, but the solution is not necessarily to factor out the duplication into a setup method. It may be to make the state easier to set up in the class itself, or to isolate the code under test so it is less dependent on this amount of state to be meaningful.
Given the general goal of only testing one thing per test, it really isn't possible to avoid doing a lot of the same thing over and over again in certain cases (such as creating an object of a certain type). If you find you have a lot of that, it may be worth rethinking the test approach, such as introducing parametrized tests and the like.
I think setup and teardown should be primarily for establishing the environment (such as injections to make the environment a test one rather than a production one), and should not contain steps that are part and parcel of the test.
I agree with everything Joseph has to say, especially the part about tearDown being a sign of writing integration tests (and 99% of the time is what I've used it for), but in addition to that I'd say that the use of setup is a good indicator of when tests should be logically grouped together and when they should be split into multiple test classes.
I have no problem with large setup methods when applying tests to legacy code, but the setup should be common to every test in the suite. When you find yourself having the setup method really doing multiple bits of setup, then it's time to split your tests into multiple cases.
Following the examples in "Test Driven", the setup method comes about from removing duplication in the test cases.
I use setup quite frequently in Java and Python, often to set up collaborators (either real or test, depending). If the object under test takes no constructor arguments, or just the collaborators as constructor arguments, I will create the object there. For a simple value class I usually don't bother.
I use teardown very infrequently in Java. In Python it was used more often, because I was more likely to change global state (in particular, monkey-patching modules to get users of those modules under test). In that case I want a teardown that is guaranteed to be called if a test fails.
Integration tests and functional tests (which often use the xunit framework) are more likely to need setup and teardown.
The point to remember is to think about fixtures, not only DRY.
I don't have an issue with test setup and teardown methods per se.
The issue to me is that if you have a test setup and teardown method, it implies that the same test object is being reused for each test. This is a potential error vector, as if you forget to clean up some element of state between tests, your test results can become order-dependent. What we really want is tests that do not share any state.
xUnit.Net gets rid of setup/teardown, because it creates a new object for each test that is run. In essence, the constructor becomes the setup method, and the finalizer becomes the teardown method. There's no (object-level) state held between tests, eliminating this potential error vector.
Most tests that I write have some amount of setup, even if it's just creating the mocks I need and wiring the object being tested up to the mocks. What they don't do is share any state between tests. Teardown is just making sure that I don't share that state.
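JUnit behaves similarly in this respect: it creates a fresh instance of the test class for every test method, so plain field initializers (or the constructor) act as an implicit per-test setup with no state carried between tests. A small illustration with invented names:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CounterTest {

    // Hypothetical class under test.
    static class Counter {
        private int value;
        void increment() { value++; }
        int value()      { return value; }
    }

    // Field initializer runs for EVERY test, because JUnit builds a new
    // CounterTest instance per test method - effectively a per-test "setup"
    // with no state leaking from one test to the next.
    private final Counter counter = new Counter();

    @Test
    public void startsAtZero() {
        assertEquals(0, counter.value());
    }

    @Test
    public void incrementingTwiceGivesTwo() {
        counter.increment();
        counter.increment();
        assertEquals(2, counter.value());   // unaffected by whatever other tests did
    }
}
```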
I haven't had time to read both of the links you posted, but I particularly liked this comment:
each test is forced to do the initialization for what it needs to run.
Setup and tear down are convenience methods - they shouldn't attempt to do much more than initialize a class using its default constructor, etc. Common code that three tests need in a five-test class shouldn't appear there - each of the three tests should call this code directly. This also keeps tests from stepping on each other's toes and breaking a bunch of tests just because you changed a common initialization routine. The main problem is that setup and teardown are called around every test - not just the specific ones that need them. Most tests should be simple, and the more complex ones will need initialization code, but it is easier to see the simplicity of the simple tests when you don't have to trace through a complex initialization in setup and a complex destruction in teardown while thinking about what the test is actually supposed to accomplish.
Personally, I've found setup and teardown aren't always evil, and that this line of reasoning is a bit dogmatic. But I have no problem calling them a code smell for unit tests. I feel their use should be justified, for a few reasons:
Test code is procedural by its nature. In general, setup/teardown do tend to reduce test readability/focus.
Setup methods tend to initialize more than what is needed for any single test. When abused they can become unwieldy. Object Mothers, Test Data Builders, perhaps frameworks like FactoryGirl seem better at initializing test data.
They encourage "context bloat" - the larger the test context becomes, the less maintainable it will be.
To the extent that my setup/teardown doesn't do this, I think their use is warranted. There will always be some duplication in tests. Neal Ford states this as "Tests can be wet but not soaking..." Also, I think their use is more justified when we're not talking about unit tests specifically, but integration tests more broadly.
Working on my own, this has never really been a problem. But I've found it very difficult to maintain test suites in a team setting, and it tends to be because we don't understand each other's code immediately, or don't want to have to step through it to understand it. From a test perspective, I've found allowing some duplication in tests eases this burden.
I'd love to hear how others feel about this, though.
If you need setup and teardown to make your unit tests work, maybe what you really need is mock objects?

Mocks or real classes? [duplicate]

Classes that use other classes (as members, or as arguments to methods) need instances that behave properly for unit test. If you have these classes available and they introduce no additional dependencies, isn't it better to use the real thing instead of a mock?
I say use real classes whenever you can.
I'm a big believer in expanding the boundaries of "unit" tests as much as possible. At this point they aren't really unit tests in the traditional sense, but rather just an automated regression suite for your application. I still practice TDD and write all my tests first, but my tests are a little bigger than most people's and my green-red-green cycles take a little longer. But now that I've been doing this for a little while I'm completely convinced that unit tests in the traditional sense aren't all they're cracked up to be.
In my experience writing a bunch of tiny unit tests ends up being an impediment to refactoring in the future. If I have a class A that uses B and I unit test it by mocking out B, when I decide to move some functionality from A to B or vice versa all of my tests and mocks have to change. Now if I have tests that verify that the end to end flow through the system works as expected then my tests actually help me to identify places where my refactorings might have caused a change in the external behavior of the system.
The bottom line is that mocks codify the contract of a particular class and often end up actually specifying some of the implementation details too. If you use mocks extensively throughout your test suite your code base ends up with a lot of extra inertia that will resist any future refactoring efforts.
It is fine to use the "real thing" as long as you have absolute control over the object. For example if you have an object that just has properties and accessors you're probably fine. If there is logic in the object you want to use, you could run into problems.
If a unit test for class A uses an instance of class B, and a change introduced to B breaks B, then the tests for class A are also broken. This is where you can run into problems, whereas with a mock object you could always return the correct value. Using "the real thing" can convolute tests and hide the real problem.
Mocks can have downsides too, I think there is a balance with some mocks and some real objects you will have to find for yourself.
There is one really good reason why you want to use stubs/mocks instead of real classes: to keep the class under test in your unit test (a pure unit test) isolated from everything else. This property is extremely useful, and the benefits of keeping tests isolated are plentiful:
Tests run faster because they don't need to call the real class implementation. If the implementation runs against the file system or a relational database, the tests become sluggish. Slow tests make developers run unit tests less often. If you're doing Test-Driven Development, time-hogging tests are a devastating waste of developer time.
It will be easier to track down problems if the test is isolated to the class under test. In contrast, with a system test it will be much more difficult to track down nasty bugs that are not readily visible in stack traces or the like.
Tests are less fragile with respect to changes made in external classes/interfaces, because you're purely testing the class that is under test. Low fragility is also an indication of low coupling, which is good software engineering.
You're testing against external behaviour of a class rather than the internal implementation which is more useful when deciding code design.
Now, if you want to use a real class in your test, that's fine, but then it is NOT a unit test. You're doing an integration test instead, which is useful for the purpose of validating requirements and as an overall sanity check. Integration tests are not run as often as unit tests; in practice they are mostly run before committing to your favorite code repository, but they are equally important.
The only thing you need to have in mind is the following:
Mocks and stubs are for unit tests.
Real classes are for integration/system tests.
Extracted and extended from an answer of mine here: How do I unit-test inheriting objects?
You should always use real objects where possible.
You should only use mock objects if the real objects do something you don't want to set up (like using sockets, serial ports, getting user input, retrieving bulky data, etc.). Essentially, mock objects are for when the estimated effort to implement and maintain a test using a real object is greater than that to implement and maintain a test using a mock object.
I don't buy into the "dependent test failure" argument. If a test fails because a depended-on class broke, the test did exactly what it should have done. This is not a smell! If a depended-on interface changes, I want to know!
Highly mocked testing environments are very high-maintenance, particularly early in a project when interfaces are in flux. I've always found it better to start integration testing ASAP.
I always use a mock version of a dependency if the dependency accesses an external system like a database or web service.
If that isn't the case, then it depends on the complexity of the two objects. Testing the object under test with the real dependency is essentially multiplying the two sets of complexities. Mocking out the dependency lets me isolate the object under test. If either object is reasonably simple, then the combined complexity is still workable and I don't need a mock version.
As others have said, defining an interface on the dependency and injecting it into the object under test makes it much easier to mock out.
Personally, I'm undecided about whether it's worth it to use strict mocks and validate every call to the dependency. I usually do, but it's mostly habit.
You may also find these related questions helpful:
What is object mocking and when do I need it?
When should I mock?
How are mocks meant to be used?
And perhaps even, Is it just me, or are interfaces overused?
Use the real thing only if it has been unit tested itself first. If it introduces dependencies that prevent that (circular dependencies or if it requires certain other measures to be in place first) then use a 'mock' class (typically referred to as a "stub" object).
If your "real things" are simply value objects like JavaBeans, then that's fine.
For anything more complex I would worry as mocks generated from mocking frameworks can be given precise expectations about how they will be used e.g. the number of methods called, the precise sequence and the parameters expected each time. Your real objects cannot do this for you so you risk losing depth in your tests.
I've been very leery of mocked objects since I've been bitten by them a number of times. They're great when you want isolated unit tests, but they have a couple of issues. The major issue is that if the Order class needs a collection of OrderItem objects and you mock them, it's almost impossible to verify that the behavior of the mocked OrderItem class matches the real-world example (duplicating the methods with appropriate signatures is generally not enough). More than once I've seen systems fail because the mocked classes don't match the real ones and there weren't enough integration tests in place to catch the edge cases.
I generally program in dynamic languages and I prefer merely overriding the specific methods which are problematic. Unfortunately, this is sometimes hard to do in static languages. The downside of this approach is that you're using integration tests rather than unit tests and bugs are sometimes harder to track down. The upside is that you're using the actual code that is written, rather than a mocked version of that code.
If you don't care for verifying expectations on how your UnitUnderTest should interact with the Thing, and interactions with the RealThing have no other side-effects (or you can mock these away) then it is in my opinion perfectly fine to just let your UnitUnderTest use the RealThing.
That the test then covers more of your code base is a bonus.
I generally find it is easy to tell when I should use a ThingMock instead of a RealThing:
When I want to verify expectations in the interaction with the Thing.
When using the RealThing would bring unwanted side-effects.
Or when the RealThing is simply too hard/troublesome to use in a test setting.
If you write your code in terms of interfaces, then unit testing becomes a joy because you can simply inject a fake version of any class into the class you are testing.
For example, if your database server is down for whatever reason, you can still conduct unit testing by writing a fake data access class that contains some cooked data stored in memory in a hash map or something.
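A minimal sketch of that idea (names invented): the production code is written against an interface, and the test swaps in a map-backed fake for the database-backed implementation.

```java
import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class FakeDataAccessTest {

    // The interface the production code is written against.
    interface CustomerDao {
        String nameOf(int customerId);
    }

    // Fake implementation: cooked data in a map instead of a database connection.
    static class InMemoryCustomerDao implements CustomerDao {
        private final Map<Integer, String> names = new HashMap<>();
        InMemoryCustomerDao() {
            names.put(1, "Ada");
            names.put(2, "Linus");
        }
        @Override public String nameOf(int customerId) { return names.get(customerId); }
    }

    // Hypothetical class under test, which only knows about the interface.
    static class GreetingService {
        private final CustomerDao dao;
        GreetingService(CustomerDao dao) { this.dao = dao; }
        String greet(int customerId) { return "Hello, " + dao.nameOf(customerId) + "!"; }
    }

    @Test
    public void greetsUsingWhateverTheDaoReturns() {
        GreetingService service = new GreetingService(new InMemoryCustomerDao());
        assertEquals("Hello, Ada!", service.greet(1));   // works even if the real database is down
    }
}
```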
It depends on your coding style, what you are doing, your experience and other things.
Given all that, there's nothing stopping you from using both.
I know I use the term unit test way too often. Much of what I do might be better called integration test, but better still is to just think of it as testing.
So I suggest using all the testing techniques where they fit. The overall aim being to test well, take little time doing it and personally have a solid feeling that it's right.
Having said that, depending on how you program, you might want to consider using techniques (like interfaces) that make mocking less intrusive a bit more often. But don't use Interfaces and injection where it's wrong. Also if the mock needs to be fairly complex there is probably less reason to use it. (You can see a lot of good guidance, in the answers here, to what fits when.)
Put another way: No answer works always. Keep your wits about you, observe what works what doesn't and why.