Since starting to write unit tests (and other kinds), I've realized that some tests (acceptance tests and the like) can quickly grow into quite complex software, and I feel that some unit tests (or at least shorter tests than the original) to verify the longer test might be in order.
"Testing tests" might sound silly, but I wanted to know if there are people practicing this and if there is any known best practice for "Testing tests"?
The tests test the production code, and the production code should test the tests. ;-) That is to say: what you mainly want to test about a test is that it properly tests the code it's intended to test. The principal way to do this is to make the code wrong and see that the tests catch it.
There are tools for doing this automatically and measuring test coverage by seeing what mutations you can do to the code that are not caught by a test. See Mutation Testing.
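To make the idea concrete, here is a minimal sketch of the kind of check a mutation tool automates (Java/JUnit, with made-up names): flip something in the production code and see whether any test fails.

```java
// A minimal sketch of what a mutation testing tool checks, using JUnit
// (the DiscountTest/discountedPrice names are invented for illustration).
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

    // Production code under test.
    static int discountedPrice(int price) {
        return price >= 100 ? price - 10 : price;   // candidate mutant: change >= to >
    }

    // This test "kills" the mutant: with the boundary input 100,
    // the mutated code returns 100 instead of 90 and the assertion fails.
    @Test
    public void appliesDiscountAtTheBoundary() {
        assertEquals(90, discountedPrice(100));
    }

    // A suite with only this test would let the mutant survive,
    // which is exactly the coverage gap mutation testing reports.
    @Test
    public void appliesDiscountWellAboveTheBoundary() {
        assertEquals(190, discountedPrice(200));
    }
}
```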
If your tests are complex, you should be refactoring them and extracting some of the complexity out into test helpers. With enough of these, you might end up building something resembling a test framework, and that would deserve tests. Indeed, JUnit, RSpec, Cucumber, FitNesse, etc. are all test frameworks and have suites of tests to test the framework.
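As a rough illustration of that extraction (hypothetical Order example, Java/JUnit), the helper stays small and obviously correct while each test reads as intent rather than setup noise:

```java
// A rough sketch of pulling repeated setup out of complex tests into a helper.
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import java.util.Arrays;
import java.util.List;

public class OrderTest {

    // Trivial stand-in for the class under test.
    static class Order {
        final List<String> items;
        Order(List<String> items) { this.items = items; }
        boolean isShippable() { return !items.isEmpty(); }
    }

    // The extracted helper: small enough to verify by inspection.
    private Order orderWithItems(String... items) {
        return new Order(Arrays.asList(items));
    }

    @Test
    public void orderWithItemsIsShippable() {
        assertTrue(orderWithItems("book", "pen").isShippable());
    }
}
```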
You should try to keep the tests themselves simple enough that you trust they work. But verifying by mutation testing might be worthwhile, especially as it also gives you a measure of your coverage.
Big tests have little tests upon their backs to verify 'em,
And little tests have lesser tests, and so ad infinitum.
Loosely based on Jonathan Swift's little poem
Don't test tests. Simply assume they're correct and move on.
If the lower tests fail, of course the higher tests will fail. That's understandable. (I would advise making a note that "This test relies on tests X, Y and Z passing" if it isn't obvious from a quick overview of the higher test.) But you don't need a specific test to make sure another test works before you move on - the test suite should be running ALL tests.
I think if you have to test your tests, you're doing something wrong. In my opinion, tests should be really easy and simple. However, what you can do is measure how good your tests are. Maybe you want to know how much code coverage your tests have...
but that's not testing the tests as you thought, is it?
In any case, it's probably a waste of time. :)
You've touched on a few things here. Let me try to cover them all ...
If you've written an in-house test framework, then it should have some tests around it. If you're not using the off-the-shelf frameworks and tools, then you need to know that the framework itself is working the way it is expected to. Tests will provide that.
Tests are a double-check of the code. It is better to test the code again from a different angle than to write a test of the test. I would rather spend time expanding my test data set, or building a fuzz tester, or some other such exercise to expand test coverage, than spend time writing code to test the tests themselves.
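For example, a tiny fuzz-style check can test the code from a different angle without any test-of-a-test machinery; this sketch (invented sorting example, Java/JUnit) just hammers an invariant with random inputs:

```java
// A small fuzz/property-style check: many random inputs, one invariant.
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import java.util.Arrays;
import java.util.Random;

public class SortFuzzTest {

    @Test
    public void sortedOutputIsAlwaysNonDecreasing() {
        Random random = new Random(42);                 // fixed seed: repeatable runs
        for (int run = 0; run < 1000; run++) {
            int[] data = random.ints(20, -100, 100).toArray();
            Arrays.sort(data);                          // stand-in for the code under test
            for (int i = 1; i < data.length; i++) {
                assertTrue("not sorted at index " + i, data[i - 1] <= data[i]);
            }
        }
    }
}
```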
You should formalize the testing process. Having a test plan (even if it is just a list of scenarios with repro steps in a wiki) is a huge step up from where most people are in their testing efforts. You should also have some form of code review for tests before/when they are checked into the code repository. This will help catch little errors like, "Oh ... you didn't realize that when x is true that it always returns five?"
When a test fails, one of the first things a tester should check is whether the test is correct. (Often because the first thing the developer will insist is that the test is wrong and their code is right.) Swallow your pride and verify the test is 100% correct, then try to figure out if there is a bug in the shipping code. You are using bug tracking software, right? When a test is broken, a bug is filed, isn't it? Sharing these test bugs helps everyone learn how to be better testers.
And finally, when all the tests are passing is when the tester should be the most vigilant! Double-check everything again and use this time to expand the test coverage in new ways, like fuzz testing, model-based testing, etc.
As I am writing tests, some of them have a lot of logic in them. Most of this logic could easily be unit tested, which would provide a higher level of trust in the tests.
I can see a way to do this, which would be to create a class TestHelpers, to put in /classes, and write tests for TestHelpers along with the regular tests.
I could not find any opinion on such a practice on the web, probably because the keywords to the problem are tricky ("tests for tests").
I am wondering whether this sounds like good practice, whether people have already done this, whether there is any advice on that, whether this points to bad design, or something of the sort.
I am running into this while doing characterization tests. I know there are some frameworks for it, but I am writing it on my own, because it's not that complicated, and it gives me more clarity. Also, I can imagine that one can easily run into the same issue with unit tests.
To give an example, at some point I am testing a function that connects to Twitter's API and retrieves some data. In order to test that the data is correct, I need to check whether it's a JSON-encoded string, whether the structure matches Twitter's data structure, whether each value has the correct type, etc. The function that does all these checks on the retrieved data would typically be interesting to test on its own.
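Here is a rough sketch of the kind of helper I mean, together with tests for the helper itself (Java/JUnit just for illustration; the field names come from Twitter's tweet payload, and I'm assuming the response has already been parsed into a map):

```java
// A sketch of a test helper plus unit tests for the helper itself.
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import java.util.HashMap;
import java.util.Map;

public class TestHelpersTest {

    // The helper the characterization tests lean on: does this parsed
    // payload look like a tweet (expected keys, expected value types)?
    static boolean looksLikeTweet(Map<String, Object> data) {
        return data.get("id_str") instanceof String
            && data.get("text") instanceof String
            && data.get("retweet_count") instanceof Integer;
    }

    // ...and the tests for the helper itself.
    @Test
    public void acceptsAWellFormedTweet() {
        Map<String, Object> tweet = new HashMap<>();
        tweet.put("id_str", "123");
        tweet.put("text", "hello");
        tweet.put("retweet_count", 7);
        assertTrue(looksLikeTweet(tweet));
    }

    @Test
    public void rejectsAPayloadWithTheWrongType() {
        Map<String, Object> bad = new HashMap<>();
        bad.put("id_str", 123);          // wrong type on purpose
        bad.put("text", "hello");
        bad.put("retweet_count", 7);
        assertFalse(looksLikeTweet(bad));
    }
}
```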
Any idea or opinion on this practice?
One of the aphorisms about TDD is that "the tests test the code, and the code tests the tests." That is, because of the red-green-refactor cycle, you see the code fail the test and then after making it work, you see it pass the test - and that alone is enough to give you pretty good confidence that the test (and all the test utility code it calls) works correctly. For characterization tests, you don't have this red-green-refactor cycle, so it may be of value for you to write tests for your test utility methods.
I think it's too much to test the tests themselves. alfasin is right: who would test the tests for the tests then? And it is no coincidence that you can't find much info on this topic - it's just not a common practice. Usually, the arrangement logic in well-written tests is covered by running the tests themselves. But I understand your concern - how can you be sure that a test is "well-written"? The most dangerous thing here is to have a passing test that should normally fail (but passes due to a bug in the test itself). Having such a test is even worse than not having a test at all. To be honest, though, I have not run into many such cases in practice. My advice is just to focus on writing good tests that cover all execution paths of your logic, and I think you'll be fine :)
While it sounds perverse, I have on occasion written automated tests for some of my testing infrastructure, if it gets fairly complex. The tests that test this testing infrastructure then tend to be simple, so the "what about testing the tests for the test?" question becomes moot, in my experience.
Note that this mainly occurs in a library designed explicitly to aid testing for other people (people writing Qt apps, in this case), though I have done it for some stand-alone apps before: for example, when writing tests for Kate's Vim mode's auto-completion integration, the fake auto-completer test-helper code used for mimicking the auto-completion in a variety of configurations got complex enough that I actually started developing it test-first.
And it's probably worth mentioning that e.g. Google Mock has hundreds of tests written for it :)
Our company is in the process of improving code quality and the processes we adopt when delivering a piece of code. My question concerns unit testing, and I wanted to gather information on the processes you adopt when you are asked to implement a piece of functionality.
Is TDD a form of unit testing? From what I understand of TDD, you write your test first (which fails), write your code, and then run your test, which should pass. It may be that the code makes external method calls. But how are we supposed to know about the stubbing required when we are writing our test first?
When you are building your application prior to release, what kinds of tests do you include in the build? Does the build run your integration tests, or does it run only your unit tests?
Apart from TDD, do you write any other kinds of tests? Sorry if the questions are slightly scattered. Your experience of how you undertake development is highly appreciated. Thanks.
TDD can be a whole lot more than Unit Testing - so I'd say that Unit Testing is just a part of TDD. The methodology as a whole, I think, can include creating tests (expressing an expectation/requirement of correct behaviour) on the result of any process in software development. Be that writing code, build scripts, deployment scripts, database scripts, data import/export/transformation... whatever you need to do, you should ask yourself: "How can I prove this has worked? Can I automate a test for that?"
As an example: something that is often overlooked because it falls out of scope of Unit Testing but is a very valid test, and one that is important to front-load in the development process is deployment.
If a piece of software cannot be easily deployed to the production environment without significant effort and change (to the software or environment architecture), it is important to know this up front, rather than a week before it has to go live. Once you have that process nailed, wouldn't it be nice to have a way of testing to make sure that it was correctly deployed?
When you understand that process - why not script and automate it? If you know the requirement is that it must be deployed, why not write a test for that before even doing it?
I've said it before but I'll say it again - the best resource I've found on the subject is Growing Object-Oriented Software, Guided by Tests - which is part of the Kent Beck Signature Series.
TDD is not about testing. TDD uses tests to drive the design of your code. TDD produces tests as a happy side-effect of designing your code by writing the tests first, but it's not about testing: it isn't a testing methodology and the purpose is not to produce tests.
Is test driven development a form of unit testing?
No. It is a design methodology.
From what I understand in TDD, you write your test first (which
fails), write your code and then run your test which should pass.
You're missing a very important step. You write your test first, you write your code until your test passes - and then you refactor. The tests permit you to refactor safely, ensuring that the desired behavior continues to work while you adjust your design. The tests also guide you to testable code, promoting smaller methods, shorter parameter lists, and overall much simpler design than other methodologies lead you to.
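As a minimal sketch of one such pass (hypothetical PriceFormatter example, Java/JUnit):

```java
// One red-green-refactor pass in miniature (names invented for the example).
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceFormatterTest {

    // Red: write this first; it fails because PriceFormatter doesn't exist yet.
    @Test
    public void formatsCentsAsDollarsAndCents() {
        assertEquals("$12.34", PriceFormatter.format(1234));
    }

    // Green: the simplest code that makes the test pass.
    static class PriceFormatter {
        static String format(int cents) {
            return String.format("$%d.%02d", cents / 100, cents % 100);
        }
    }
    // Refactor: with the test green, rename, extract, or simplify freely,
    // rerunning the test after each small change to confirm behaviour is unchanged.
}
```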
Apart from TDD, do you write any other kind of test?
When I do, it's usually a sign that I've failed to do TDD properly (but it certainly happens). We have both unit tests and user acceptance tests; both can be written prior to code, but sometimes our user acceptance tests are written later in the development cycle. They shouldn't be, but sometimes they are.
TDD is about design during the 5 minutes or so of your original Red-Green-Refactor loop. But it's arguably about testing forever after since there is nothing left to design - your TDD tests then become part of a perfect test harness to detect regressions caused by further developments. So yes, I guess you could say test driven development is a form of unit testing :)
But how are we suppose to know about the stubbing required when we are
writing our test first?
TDD often requires a (quick) prior modelling session where you flesh out the big picture classes your SUT will collaborate with.
However you need not go into the details of how these collaborators work. With mocks you basically apply wishful thinking that their implementations will behave correctly when you have TDD'd them at some point later, so for now you can just concentrate on the SUT.
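A small sketch of that wishful thinking (Mockito-style mock; the RateProvider/Converter names are invented): the collaborator exists only as an interface, yet the SUT can already be driven out.

```java
// Mocking a collaborator that hasn't been implemented yet.
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class ConverterTest {

    interface RateProvider {          // collaborator we'll TDD later
        double rateFor(String currency);
    }

    static class Converter {          // the SUT we're driving out now
        private final RateProvider rates;
        Converter(RateProvider rates) { this.rates = rates; }
        double toEuros(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsUsingTheProvidedRate() {
        RateProvider rates = mock(RateProvider.class);
        when(rates.rateFor("USD")).thenReturn(0.5);

        assertEquals(5.0, new Converter(rates).toEuros(10.0, "USD"), 1e-9);
    }
}
```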
When you are building your application prior release, what kind of
test do you include in the build? Does the build run your integration
test or does it run only your unit test?
When you practice Continuous Integration, your unit tests are supposed to run each time, so you can theoretically take any (non-failing) build and use it as a release build.
However, you may want to run automated or manual integration/acceptance tests as well before releasing your version. GUIs, for instance, are usually not easily unit testable, so acceptance/integration testing is a good way to track down bugs in them.
You have several questions here; I'll try to address them in a logical order.
Is TDD a form of unit testing?
Id say "yes", in the sense it creates unit tests, even if it isnt the only benefit of using TDD. On the topic stressed by commentators, but not mentioned in your question: TDD not only ensures test coverage and documentiation (good tests are one of the best form of low level code documentation). Using TDD forces you to make certain design decisions, usually improving the overall app design.
Do You write other tests?
Well, I don't write any other unit tests. The point of TDD is the development of the code in parallel with the development of the tests. By writing software in a cycle - a single test, then only enough code to pass it - you're sure that your tests document all the functionality and behaviour you require from your code, and you make sure that the code is testable (you have to write it that way when doing TDD). There should be no need for additional unit tests.
There are other kinds of tests that you should use, though. Integration tests come to mind first, but there are others, like acceptance tests. If you have those automated, things will be easier for you. It's not you who should be writing acceptance tests - it should be your customer/stakeholder, and you should be helping them with the technical part of writing them. You may be interested in FitNesse (http://fitnesse.org/) - it's a tool that helps non-technical people build acceptance tests.
About the stubbing?
It's kind of difficult to discuss this without concrete examples. All I can say right now is: just write the code one test at a time. If you do so, chances are you won't encounter a situation where you have a complicated class and have to think about how to stub around its complex dependencies.
What tests should be included in the build?
I'd say all of them, if possible!
I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify behavior in a small section of code. For example, you may write a unit test to make sure a method returns the right value and makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you out with your assertions.
Unit testing to me is supposed to be fast and quick. To me, you want these so you can automate it, and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they
actually check the code they wrote locally, and that all of the points
are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework you might use the same testing framework (e.g. junit for java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
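A small sketch of that distinction (invented Greeter/Clock example, Java/JUnit): same test framework either way, but hand-stubbing the dependency keeps it a pure unit test.

```java
// Stubbing a dependency by hand to keep the test a pure unit test.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreeterTest {

    interface Clock { int hour(); }                 // the dependency

    static class Greeter {                          // the unit under test
        private final Clock clock;
        Greeter(Clock clock) { this.clock = clock; }
        String greeting() { return clock.hour() < 12 ? "Good morning" : "Good afternoon"; }
    }

    @Test
    public void greetsInTheMorning() {
        Clock nineAm = () -> 9;                     // hand-rolled stub, no real clock
        assertEquals("Good morning", new Greeter(nineAm).greeting());
    }
    // Using the real system clock here instead of the stub would make this an
    // integration test of Greeter plus the clock, not a unit test of Greeter.
}
```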
Unit testing to me is supposed to be fast and quick. To me, you want
these so you can automate it, and have a huge suite of tests for your
application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit-tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, less problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it works and what exactly it does, things will become a lot more problematic. And before you realize it, you'll spend your nights browsing through log files and debugging what seemed like a small feature, just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both with and without unit tests knows that. Please, please, make them change their mind. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was done without testing in mind than to build the test and production code side by side from the very start. Testing not only helps you make sure your code works fine, it improves your code quality by forcing you to make good decisions (e.g. coding to interfaces, loose coupling, inversion of control, etc.).
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to carve on the same code. And that's wasteful.
To add my two cents: yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copy and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.
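For instance, a test class in this style might look like the following sketch (invented shopping-cart domain, Java/JUnit); reading just the method names tells you what the cart does:

```java
// Descriptive test names that double as a specification of the behaviour.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;

public class ShoppingCartTest {

    // Trivial stand-in so the example runs; real tests would target the real class.
    static class Cart {
        private final List<Integer> prices = new ArrayList<>();
        void add(int price) { prices.add(price); }
        void removeLast() { prices.remove(prices.size() - 1); }
        int total() { return prices.stream().mapToInt(Integer::intValue).sum(); }
        boolean isEmpty() { return prices.isEmpty(); }
    }

    @Test
    public void emptyCartHasTotalOfZero() {
        assertEquals(0, new Cart().total());
    }

    @Test
    public void addingTwoItemsSumsTheirPrices() {
        Cart cart = new Cart();
        cart.add(300);
        cart.add(200);
        assertEquals(500, cart.total());
    }

    @Test
    public void removingTheLastItemLeavesTheCartEmpty() {
        Cart cart = new Cart();
        cart.add(300);
        cart.removeLast();
        assertTrue(cart.isEmpty());
    }
}
```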
I can often identify plenty of areas which are nicely encapsulated and easily unit tested, but I also find a lot of code where unit testing doesn't really seem to work as well - typically data access and user interface. No matter what unit testing "techniques" I try, I tend to find that in these places it's not only a lot of effort to create functioning unit tests, but that these tests tend to be very fragile and don't really test very much.
At what point do you stop and decide that the benefits of unit testing aren't worth the cost?
When you can provide better value by doing something else.
I tend to test only the model and data persistence. Testing the model is mandatory. UI (desktop applications, web apps, command-line interfaces, etc.) is hard to test, so I write tests for it only in rare circumstances.
Usually I only test the model and the controller. Unit tests are hard to apply to the UI; usually I prefer to test the app manually.
If creating these tests costs more time than manually testing every time you might have a regression, then the tests are useless (easy to say, but hard to evaluate...).
If you need to cut out testing, cut out integration or end-to-end testing. Misko Hevery at Google explains it really well here.
"Unit Testing gives you more bang for
your buck"
is the best quote to come out of his article.
Other than that, when you have decent code coverage and are handling a few edge cases of your code, then it's a good time to stop unit testing.
If you have access to some live data, you can use it for unit testing. You can also use data generators and random data. Unit tests only give you some level of confidence that the code won't create problems in the future. If you are confident in your testing, you can discontinue unit tests.
I like the title of the question. Apart from that I think it is a dupe of
Is there such a thing as excessive unit testing?
I would say when you've gained an acceptable level of confidence. Also, like for my project at work, we are under such tight time requirements that I really have to only test certain parts of code (not all of it) just to give me a good confidence level.
As far as data access testing goes, have you tried mock tests to simulate responses?
The basic rule of thumb I'd follow: stop when the effort to build the unit test is more than the effort to repeatedly test the feature manually.
If you look at the Test projects in the Visual Studio Team edition for Testers, there is such an item called a "Manual Test" which is essentially an instruction document to tell a human how to carry out the test and manually pass it. Certain things, like you mention UI testing, or code to workaround obscurely odd or buggy hardware behaviour in the underlying framework or OS or driver, are better verified by human eyes.
If you are using TDD, then you stop unit testing when all the tests in the test list have succeeded.
Otherwise, you stop unit testing when the cost of finding more bugs through unit testing exceeds the cost of finding them through your QA process, and when you've reached an acceptable level of code coverage through the combination of all tests.
When there isn't time in the project plan for it and time is being spent finding ways of testing rather than working towards the goal of the project.
I just had a conversation with my lead developer who disagreed that unit tests are all that necessary or important. In his view, functional tests with a high enough code coverage should be enough since any inner refactorings (interface changes, etc.) will not lead to the tests being needed to be rewritten or looked over again.
I tried explaining but didn't get very far, and thought you guys could do better. ;-) So...
What are some good reasons to unit test code that functional tests don't offer? What dangers are there if all you have are functional tests?
Edit #1 Thanks for all the great answers. I wanted to add that by functional tests I don't mean only tests on the entire product, but rather also tests on modules within the product, just not on the low level of a unit test with mocking if necessary, etc. Note also that our functional tests are automatic, and are continuously running, but they just take longer than unit tests (which is one of the big advantages of unit tests).
I like the brick vs. house example. I guess what my lead developer is saying is testing the walls of the house is enough, you don't need to test the individual bricks... :-)
Off the top of my head
Unit tests are repeatable without effort. Write once, run thousands of times, with no human effort required and much faster feedback than you get from a functional test.
Unit tests test small units, so they immediately point to the correct "sector" in which the error occurs. Functional tests point out errors, but those can be caused by any of several modules, even interacting with each other.
I'd hardly call an interface change "an inner refactoring". Interface changes tend to break a lot of code, and (in my opinion) force a new test loop rather than none.
unit tests are for devs to see where the code failed
functional tests are for the business to see if the code does what they asked for
unit tests are checking that you've manufactured your bricks correctly
functional tests are checking that the house meets the customer's needs.
They're different things, but the latter will be much easier if the former has been carried out.
It can be a lot more difficult to find the source of problems if a functional test fails, because you're effectively testing the entire codebase every time. By contrast, unit tests compartmentalize the potential problem areas. If all the other unit tests succeed but this one, you have an assurance that the problem is in the code you're testing and not elsewhere.
Bugs should be caught as soon as possible in the development cycle - having bugs move from design to code, or from code to test, or (hopefully not) from test to production increases the cost and time required to fix them.
Our shop enforces unit testing for that reason alone (I'm sure there are other reasons but that's enough for us).
If you use a pure Extreme Programming / Agile development methodology, unit tests are always required, as they are the requirements for development.
In pure XP/Agile, one bases all requirements on the tests which are going to be performed on the application:
Functional tests - generate functional requirements.
Unit tests - generate function or object requirements.
Other than that, unit testing can be used to keep persistent track of function requirements.
For example, if you need to change the way a function works internally but its inputs and outputs stay untouched, then unit testing is the best way to keep track of possible problems, as you only need to re-run the tests.
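A small sketch of that idea (invented example, Java/JUnit): the test pins only the input/output contract, so the internals can be rewritten freely and a quick test run tells you whether anything broke.

```java
// The test fixes the contract; the implementation is free to change.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SumTest {

    // Current implementation: a loop. Tomorrow it could become a stream or a
    // closed-form formula - the test below doesn't care.
    static int sumUpTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    @Test
    public void sumsTheFirstTenIntegers() {
        assertEquals(55, sumUpTo(10));
    }
}
```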
In TDD/BDD, unit tests are necessary to write the program. The process goes
failing test -> code -> passing test -> refactor -> repeat
The article linked also mentions the benefits of TDD/BDD. In summary:
Comes very close to eliminating the use of a debugger (I only use it in tests now and very rarely for those)
Code can't stay messy for longer than a few minutes
Documentation examples for an API built-in
Forces loose coupling
The linked article also has a (silly) walk-through example of TDD/BDD, but it's in PowerPoint (ew), so here's an HTML version.
Assume for a second that you already have a thorough set of functional tests that check every possible use case available and you are considering adding unit tests. Since the functional tests will catch all possible bugs, the unit tests will not help catch bugs. There are however, some tradeoffs to using functional tests exclusively compared to a combination of unit tests, integration tests, and functional tests.
Unit tests run faster. If you've ever worked on a big project where the test suite takes hours to run, you can understand why fast tests are important.
In my experience, practically speaking, functional tests are more likely to be flaky. For example, sometimes the headless capybara-webkit browser just can't reach your test server for some reason, but you re-run it and it works fine.
Unit tests are easier to debug. Assuming that the unit test has caught a bug, it's easier and faster to pinpoint exactly where the problem is.
On the other hand, if you decide to just keep your functional tests and not add any unit tests:
If you ever need to re-architect the entire system, you may not have to rewrite any tests. If you had unit tests, a lot of them will probably be deleted or rewritten.
If you ever need to re-architect the entire system, you won't have to worry about regressions. If you had relied on unit tests to cover corner cases, but you were forced to delete or rewrite those unit tests, your new unit tests are more likely to have mistakes in them than the old unit tests.
Once you already have the functional test environment set up and you have gotten over the learning curve, additional functional tests are often easier to write, and easier to write correctly, than a combination of unit tests, integration tests, and functional tests.