There are different kinds of tests: unit, integration, functional, and acceptance. So if I'm doing proper test-driven development, when do I write each kind of test?
I'm thinking that in typical TDD, the unit tests are the kind of tests that precede the writing of code. The typical workflow I see is:
Write failing unit test
Run test to verify that it fails
Write simplest passing function/method
Run test to verify that it passes
Refactor code
Soooo...where do the integration, functional, and acceptance tests come in? Do you write them after the code? Or do you write them along with the unit test at the very beginning?
Also, as an additional question, I often hear about this "100% code coverage" idea. It's easy to see how this would apply to unit testing--just have one test for every method. But should you aim to have 100% code coverage for each kind of test? For example, should unit tests cover 100% of my code AND functional tests cover 100% of my code (albeit from a more broad perspective)?
While it tends to fit more naturally with lower level tests, TDD is really a mindset that can be applied at any (or all) levels. You could write a failing acceptance test, then write corresponding failing integration tests, break them down into failing unit tests and then "green up" your way back to the original acceptance test as you make each test in the chain pass.
An article that illustrates this: ATDD From the Trenches
Regarding code coverage, in my experience you get most of it from unit tests and/or integration tests, depending on the degree of isolation you like in your testing style. Either way, I see them as complementary; you shouldn't look for 100% coverage in each test category. Higher-level tests (system, end-to-end, acceptance...), on the other hand, will typically flush out configuration/environment problems, which generally don't have an impact on code coverage.
I typically write an external test first that will drive the development of the feature. This approach is part of the London School of TDD.
As highlighted in the article above by Jason Gorman, the London school's definitive text is Growing Object Oriented Software Guided By Tests by Steve Freeman and Nat Pryce.
For a short summary:
Unit tests are the small ones that check that something works correctly from the developer's point of view.
Functional tests are the ones that check that things work correctly from the user's point of view.
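As a rough, hypothetical illustration (all names invented, using Python with pytest), a unit test checks one piece of logic from the developer's point of view, while a functional test drives the feature the way a user would:

```python
# Illustrative sketch only -- apply_discount and the /checkout route are made up.

def apply_discount(total, percent):
    """Pure business logic: the kind of thing a unit test targets."""
    return round(total * (1 - percent / 100), 2)

def test_apply_discount_unit():
    # Developer's view: does this one unit compute the right number?
    assert apply_discount(200.0, 10) == 180.0

def test_checkout_shows_discounted_total_functional(client):
    # User's view: drive the feature through the web layer, the way a person
    # (or a browser) would. `client` is assumed to be a test-client fixture
    # provided by your web framework.
    client.post("/cart", data={"item": "book", "qty": 2})
    response = client.get("/checkout?coupon=SAVE10")
    assert b"Total: $180.00" in response.data
```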
If Functional Tests are already satisfied, do we still need to do Unit Tests?
Most likely (let's use web development as the context), this is commonly done by using the browser to see if things look right, or by letting other users/people try the system/application.
Let's also consider other tests, such as those for edge cases.
Are there any metrics that you're using to determine if a functional test is "satisfied"?
I feel like it helps to have an objective measurement to create a baseline for comparing types of tests; a common one is code coverage.
By measuring code coverage it's easy to compare functional tests and unit tests: does the functional test cover the same lines of code as the unit test? If so, then it is redundant.
The problem is this ignores a ton of other issues:
Are the functional tests prohibitively long to run? Are they difficult to set up, or are they only executed in CI? Then it may make sense to cover the same lines of code again in unit tests, which will give developers fast feedback.
Is this a POC or an immature project? Functional tests may provide the best bang for the buck, since they should be able to assert higher-level use cases while being abstracted from implementation details. This is extremely helpful when implementation details are uncertain.
Code coverage is misleading. An IO-centric library (e.g., a DB driver) could easily have 100% code coverage by mocking its dependencies. If we used code coverage to compare functional tests and unit tests in this case, we'd be missing multiple dimensions of testing, as the functional tests will exercise the IO dependencies. (IMO unit testing in this case actually provides fairly small value and gives false confidence in IO-heavy code, resulting in integration bugs that are usually uncovered at later cycles in development, where issues are more expensive to fix.)
You touched on edge cases: functional tests will usually illustrate a couple of client flows. Exercising all the edge cases and error handling through functional tests usually tends to be a waste of resources and ends up creating huge suites of slow, difficult-to-maintain tests (see the sketch below).
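To make that concrete, here is a hedged sketch (invented `parse_quantity` function, Python/pytest): a parameterized unit test sweeps the edge cases in milliseconds, so the functional suite only needs one or two representative flows:

```python
import pytest

def parse_quantity(raw):
    """Hypothetical input parser whose edge cases we want to pin down."""
    value = int(raw)  # raises ValueError for non-numeric input
    if value <= 0 or value > 100:
        raise ValueError("quantity out of range")
    return value

# Many edge cases covered cheaply -- no browser, no server, no database.
@pytest.mark.parametrize("raw", ["0", "-1", "101", "abc", ""])
def test_parse_quantity_rejects_bad_input(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)

@pytest.mark.parametrize("raw,expected", [("1", 1), ("100", 100)])
def test_parse_quantity_accepts_boundaries(raw, expected):
    assert parse_quantity(raw) == expected
```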
I am working on a web application that is constantly being enhanced in a parallel development environment (two requirements are developed in two different environments, and the first code base is merged into the second when the first requirement is released to production).
My question is about having both integration testing and unit testing for the app, and about their maintenance.
Unit testing with mocking makes it difficult to maintain tests in parallel development; integration testing (using Selenium) in parallel development makes it difficult to maintain the required data in the database (which may still be easier than fixing a failed unit test).
I am leaning towards integration testing, as merging code will not break a use case, whereas a unit test may fail after a merge because of its mock expectations.
The app is a little old and was not properly designed for unit testing, so refactoring code and maintaining unit test cases is becoming hard. Please suggest a better approach for testing.
Unit tests and integration tests both have their place.
Unit tests, as the name indicates, verify that a unit behaves as expected.
It's true that integration tests cover the same code that is covered by the unit tests. But unit tests help you pinpoint issues more easily. Instead of investigating the failure to understand which part of the system is responsible for the issue, you have a failing unit test to help you find out.
Another reason to have unit tests is speed. Unit tests should be fast. They should not rely on various system dependencies (you should be using mocks and stubs for that). If you have unit tests with good coverage you get feedback fast regarding the quality of the system - before you start your long test cycle.
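For example, a minimal sketch (invented names, Python's `unittest.mock`) of stubbing out a slow external dependency so the unit test gives feedback in milliseconds:

```python
from unittest.mock import Mock

def fetch_greeting(gateway, user_id):
    """Hypothetical unit under test: formats data it gets from a gateway."""
    user = gateway.get_user(user_id)
    return f"Hello, {user['name']}!"

def test_fetch_greeting_formats_name():
    # Stub out the network/database dependency so the test stays fast
    # and doesn't depend on any system being up.
    gateway = Mock()
    gateway.get_user.return_value = {"name": "Ada"}

    assert fetch_greeting(gateway, 42) == "Hello, Ada!"
    gateway.get_user.assert_called_once_with(42)
```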
Actually, you usually employ various levels of automated tests:
Smoke tests. These are unit tests that exercise various parts of the system in the most basic scenarios. They are usually employed as part of a gated check-in that keeps bad code from being checked in. You need these to be fast.
Regression unit tests. Usually part of continuous integration. Again, you need these to be as fast as possible so that the build does not take too long.
Full regression + integration tests. These are broader system tests that take longer to run. They usually run once a day in a nightly build, or even less frequently (depending on their length).
If you find the cost of maintaining some types of tests too high, it's sensible to consider dropping them. I think integration tests are more important for QA, and if I could only choose one type of testing (unit vs integration) that's what I would go with.
However, first you might also want to consider whether there is a way to factor your unit tests to avoid these maintenance issues. When I first started using mocks, it took me a while to find the right balance. In my experience, it is best to avoid mocks (with expectations) as much as possible. I do use mocking libraries, though, mostly for trivial stubbing rather than more complex mocking. I wrote a blog post about this a while back if you are interested.
W/ regard to the unit test issue:
I would suspect that the unit tests you are constantly having to refactor are at a bit too low a level. Generally it is better to have somewhat higher-level tests that exercise the lower-level code by varying the inputs.
Assuming there is not unnecessary code, the higher level tests should still provide good code coverage (if they can't reach code, why is the code there?).
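A hedged sketch of what that can look like (invented names, Python/pytest): the tests target the public entry point with varied inputs, and the low-level helper gets exercised indirectly, so renaming or inlining it later doesn't break anything:

```python
import pytest

def _normalize(raw):
    # Low-level helper: no test targets it directly.
    return raw.strip().lower()

def register_username(raw):
    """Public entry point; exercises _normalize internally."""
    name = _normalize(raw)
    if not name.isalnum():
        raise ValueError("invalid username")
    return name

# Varying the inputs at the public level still covers the helper's branches.
@pytest.mark.parametrize("raw,expected", [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
])
def test_register_username_normalizes(raw, expected):
    assert register_username(raw) == expected

def test_register_username_rejects_symbols():
    with pytest.raises(ValueError):
        register_username("not valid!")
```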
W/ regard to the functional test issue:
You may want to consider a representative data sample (or several). That way you have a known input, so you should get predictable output.
Avoid unit testing altogether, or start with unit testing and, once you reach integration tests, "pass the relay baton" to integration testing. Delete the unit tests previously written, or leave some for documentation.
Code coverage cannot be measured with integration testing (as it passes through different systems).
Even though maintaining both unit tests and integration tests is theoretically ideal, let us not risk it, say the experienced.
Only an advanced unit tester can become an integration tester, so only those who have become integration testers can advise against unit testing. We had better listen to them, because they are passionate about unit testing yet are willing to sacrifice the 'fun' of writing unit tests for a better 'safety net' for the team.
Integration tests are difficult to implement (seeding the DB, recreating infrastructure, etc.), and we simply won't have time to maintain both unit and integration tests.
Unit testing can still be good for library (.dll) and framework development, complex calculations, and product development companies. But how many of us work on these? These days, for web development, everyone can work with end-to-end scenarios easily, as there are frameworks already available. For this, integration tests are best anyway.
Some helpful links:
"Why You Need to Stop Writing Unit Tests" https://hackernoon.com/why-you-need-to-stop-writing-unit-tests
"Write tests. Not too many. Mostly integration."
https://kentcdodds.com/blog/write-tests
"The No. 1 unit testing best practice: Stop doing it"
https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
"Unit test kills"
https://www.linkedin.com/pulse/before-you-feed-unit-test-beast-s-a-n-j-ay-mohan/?trackingId=5eIFIJGSBpnGuXMEz2PnwQ%3D%3D
Our company is in the process of improving the code quality and the processes to adopt when delivering a piece of code. My question concerns unit testing, and I wanted to gather information on the process you adopt when you are asked to implement a functionality.
Is TDD a form of unit testing? From what I understand of TDD, you write your test first (which fails), write your code, and then run your test, which should pass. It may be that the code will make external method calls. But how are we supposed to know about the stubbing required when we are writing our test first?
When you are building your application prior to release, what kinds of tests do you include in the build? Does the build run your integration tests, or does it run only your unit tests?
Apart from TDD, do you write any other kinds of tests? Sorry if the questions are slightly muddled. Your experience of how you undertake development is highly appreciated. Thanks.
TDD can be a whole lot more than Unit Testing - so I'd say that Unit Testing is just a part of TDD. The methodology as a whole, I think, can include creating tests (expressing an expectation/requirement of correct behaviour) on the result of any process in software development. Be it writing code, build scripts, deployment scripts, database scripts, data import/export/transformation... whatever you need to do, you should ask yourself, "How can I prove this has worked? Can I automate a test for that?"
As an example: something that is often overlooked because it falls out of scope of Unit Testing but is a very valid test, and one that is important to front-load in the development process is deployment.
If a piece of software cannot be easily deployed to the production environment without significant effort and change (to the software or the environment architecture), it is important to know this up front, rather than a week before it has to go live. Once you have that process nailed, wouldn't it be nice to have a way of testing to make sure that it was correctly deployed?
When you understand that process - why not script and automate it? If you know the requirement is that it must be deployed, why not write a test for that before even doing it?
I've said it before but I'll say it again - the best resource I've found on the subject is Growing Object-Oriented Software, Guided by Tests - which is part of the Kent Beck Signature Series.
TDD is not about testing. TDD uses tests to drive the design of your code. TDD produces tests as a happy side-effect of designing your code by writing the tests first, but it's not about testing: it isn't a testing methodology and the purpose is not to produce tests.
Is test driven development a form of unit testing?
No. It is a design methodology.
From what I understand of TDD, you write your test first (which fails), write your code, and then run your test, which should pass.
You're missing a very important step. You write your test first, you write your code until your test passes - and then you refactor. The tests permit you to refactor safely, ensuring that the desired behavior continues to work while you adjust your design. The tests also guide you to testable code, promoting smaller methods, shorter parameter lists, and overall much simpler design than other methodologies lead you to.
Apart from TDD, do you write any other kind of test?
When I do, it's usually a sign that I've failed to do TDD properly (but it certainly happens). We have both unit tests and user acceptance tests; both can be written prior to code, but sometimes our user acceptance tests are written later in the development cycle. They shouldn't be, but sometimes they are.
TDD is about design during the 5 minutes or so of your original Red-Green-Refactor loop. But it's arguably about testing forever after since there is nothing left to design - your TDD tests then become part of a perfect test harness to detect regressions caused by further developments. So yes, I guess you could say test driven development is a form of unit testing :)
But how are we supposed to know about the stubbing required when we are writing our test first?
TDD often requires a (quick) prior modelling session where you flesh out the big-picture classes your SUT (system under test) will collaborate with.
However you need not go into the details of how these collaborators work. With mocks you basically apply wishful thinking that their implementations will behave correctly when you have TDD'd them at some point later, so for now you can just concentrate on the SUT.
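A hedged sketch of that "wishful thinking" style (invented names, Python's `unittest.mock`): the payment gateway collaborator doesn't exist yet, so the mock stands in for the interface we wish it had, and we TDD the real implementation later:

```python
from unittest.mock import Mock

class OrderProcessor:
    """The SUT we are test-driving right now."""
    def __init__(self, gateway):
        self._gateway = gateway

    def place(self, order):
        # We only care that the SUT asks its collaborator to charge the right
        # amount; how charging actually works is a later TDD session.
        self._gateway.charge(order["total"])
        return "confirmed"

def test_place_order_charges_the_gateway():
    gateway = Mock()  # stand-in for a payment gateway that isn't written yet
    processor = OrderProcessor(gateway)

    assert processor.place({"total": 99.0}) == "confirmed"
    gateway.charge.assert_called_once_with(99.0)
```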
When you are building your application prior to release, what kinds of tests do you include in the build? Does the build run your integration tests, or does it run only your unit tests?
When you practice Continuous Integration, your unit tests are supposed to be run each time so you can theoretically take any (non-failed) build and use it as a release build.
However, you may want to run automated or manual integration/acceptance tests as well before releasing your version. GUIs, for instance, are usually not easily unit testable, so acceptance/integration testing is a good way to track down bugs in them.
You have several questions here; I'll try to address them in a logical order.
Is TDD a form of unit testing?
Id say "yes", in the sense it creates unit tests, even if it isnt the only benefit of using TDD. On the topic stressed by commentators, but not mentioned in your question: TDD not only ensures test coverage and documentiation (good tests are one of the best form of low level code documentation). Using TDD forces you to make certain design decisions, usually improving the overall app design.
Do You write other tests?
Well, I don't write any other unit tests. The point of TDD is the development of the code parallel to the development of the tests. By writing software in a cycle - single test, only enough code to pass it, you're sure that your tests document all the functionality and behaviour you require from your code and you make sure that the code is testable (you have to write it that way doing TDD). There should be no need for additional unit tests
There are other kinds of tests that you should use tho. Integration tests come to mind first, but there are other, like acceptance tests. If you have those automated, you will have it easier on you. Its not you who should be writing acceptance tests - it should be your customer/stakeholder, and You should be helping him on the technical part of writing them. You may be interested in Fitnesse http://fitnesse.org/ - its a tool that helps non-technical people build acceptance tests.
About the stubbing?
It's kind of difficult to discuss this without concrete examples. All I can say right now is: just write the code one test at a time. If you do so, chances are you won't run into a situation where you have a complicated class and have to puzzle out how to stub around its complex dependencies.
What tests should be included in the build?
I'd say - all of them, if possible!
I want to introduce Unit Testing to some colleagues that have no or little experience with Unit Testing. I'll start with a presentation of about an hour to explain the concept and give lots of examples. I'll follow up with pair programming sessions and code reviews.
What are the key points that should be focused on in the introduction?
To keep it really short: Unit testing is about two things
a tool for verifying intentions
a necessary safety net for refactoring
Obviously, it is a lot more than that, but to me that pretty much sums it up.
Unit tests test small things
Another thing to remember is that unit tests test small things, "units". So if your test runs against a resource like a live server or a database, most people call that a system or integration test. To unit test just the code that talks to a resource like that, people often use mock objects (often called mocks).
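For instance, a minimal sketch (invented names, Python's `unittest.mock`): the mock stands in for a database-backed repository, which keeps this a unit test; pointing the same code at a real database would make it a system or integration test:

```python
from unittest.mock import Mock

class ReportService:
    """Code under test that normally talks to a live database."""
    def __init__(self, repository):
        self._repository = repository  # the resource boundary

    def overdue_count(self):
        return len(self._repository.find_overdue_invoices())

def test_overdue_count_with_a_mocked_repository():
    # The mock replaces the database-backed repository, so no live
    # resource is needed and the test stays small and fast.
    repo = Mock()
    repo.find_overdue_invoices.return_value = [{"id": 1}, {"id": 2}]

    assert ReportService(repo).overdue_count() == 2
```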
Unit tests should run fast and be run often
When unit tests test small things, the tests run fast. That's a good thing. Frequently running unit tests helps you catch problems soon after they occur. The ultimate in frequently running unit tests is having them automated as part of continuous integration.
Unit tests work best when coverage is high
People have different views as to whether 100% unit test coverage is desirable. I'm of the belief that high coverage is good, but that there is a point of diminishing returns. As a very rough rule of thumb, I would be happy with a code base that had 85% coverage with good unit tests.
Unit tests aren't a substitute for other types of tests
As important as unit tests are, other types of testing, like integration tests, acceptance tests, and others can also be considered parts of a well-tested system.
Unit testing existing code poses special challenges
If you're looking to add unit tests to existing code, you may want to look at Working Effectively with Legacy Code by Michael Feathers. Code that wasn't designed with testing in mind may have characteristics that make testing difficult and Feathers writes about ways of carefully refactoring code to make it easier to test. And when you're familiar with certain patterns that make testing code difficult, you and your team can write code that tries to avoid/minimize those patterns.
You might get some inspiration here too: https://stackoverflow.com/questions/581589/introducing-unit-testing-to-a-wary-team/581610#581610
Remember to point out that Unit Testing is not a silver bullet and shouldn't replace other forms of traditional testing (Functional Tests etc) but should be used in conjunction.
Unit testing works better in some areas than others, so the only way to have truly comprehensive testing is to combine it with other forms.
This seems to be one of the biggest criticisms I see of Unit Testing as a lot of people don't seem to 'get' that it shouldn't be replacing other forms of testing in totality.
Main points:
unit tests help both design (by expressing intent) and regression test (by never going away) code;
unit tests are for lazy programmers who don't want to debug their code again;
tests have no business influencing or affecting business logic and functionality, but they do test it fully;
unit tests demand the same qualities as regular code: theory and strategy, organization, patterns, smells, refactoring;
Unit tests should be FAIR.
F Fast
A Can be easily Automated
I Can be run Independently
R Repeatable
I just had a conversation with my lead developer who disagreed that unit tests are all that necessary or important. In his view, functional tests with a high enough code coverage should be enough since any inner refactorings (interface changes, etc.) will not lead to the tests being needed to be rewritten or looked over again.
I tried explaining but didn't get very far, and thought you guys could do better. ;-) So...
What are some good reasons to unit test code that functional tests don't offer? What dangers are there if all you have are functional tests?
Edit #1 Thanks for all the great answers. I wanted to add that by functional tests I don't mean only tests on the entire product, but rather also tests on modules within the product, just not on the low level of a unit test with mocking if necessary, etc. Note also that our functional tests are automatic, and are continuously running, but they just take longer than unit tests (which is one of the big advantages of unit tests).
I like the brick vs. house example. I guess what my lead developer is saying is testing the walls of the house is enough, you don't need to test the individual bricks... :-)
Off the top of my head
Unit tests are repeatable without effort. Write once, run thousands of times, no human effort required, and much faster feedback than you get from a functional test
Unit tests test small units, so immediately point to the correct "sector" in which the error occurs. Functional tests point out errors, but they can be caused by plenty of modules, even in co-operation.
I'd hardly call an interface change "an inner refactoring". Interface changes tend to break a lot of code, and (in my opinion) force a new test loop rather than none.
unit tests are for devs to see where the code failed
functional tests are for the business to see if the code does what they asked for
unit tests are checking that you've manufactured your bricks correctly
functional tests are checking that the house meets the customer's needs.
They're different things, but the latter will be much easier if the former has been carried out.
It can be a lot more difficult to find the source of problems if a functional test fails, because you're effectively testing the entire codebase every time. By contrast, unit tests compartmentalize the potential problem areas. If all the other unit tests succeed but this one, you have an assurance that the problem is in the code you're testing and not elsewhere.
Bugs should be caught as soon as possible in the development cycle - having bugs move from design to code, or code to test, or (hopefully not) test to production increases the cost and time required to fix it.
Our shop enforces unit testing for that reason alone (I'm sure there are other reasons but that's enough for us).
If you use a pure Extreme Programming / Agile development methodology, unit tests are always required, as they are the requirements for development.
In pure XP/Agile, all requirements are expressed as the tests that are going to be performed against the application:
Functional tests - generate functional requirements.
Unit tests - generate function or object requirements.
Other than that, unit testing can be used to keep persistent track of a function's requirements.
I.e., if you need to change the way a function works internally while its inputs and outputs remain untouched, then unit testing is the best way to keep track of possible problems, as you only need to rerun the tests.
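For example (a hypothetical sketch in Python): the test below pins down the function's input/output contract, so you can later swap the implementation - a lookup table, an external rate service, whatever - and simply rerun the same test to catch regressions:

```python
def shipping_cost(weight_kg):
    """Version 1: a simple flat-rate implementation (made-up example)."""
    return 5.0 + 1.5 * weight_kg

# If shipping_cost is later rewritten but its inputs and outputs are meant
# to stay the same, this test keeps tracking that contract for us.
def test_shipping_cost_contract():
    assert shipping_cost(0) == 5.0
    assert shipping_cost(10) == 20.0
```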
In TDD/BDD, unit tests are necessary to write the program. The process goes
failing test -> code -> passing test -> refactor -> repeat
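One turn of that loop might look like this (an invented example, Python/pytest):

```python
# Step 1 -- the failing test: fizzbuzz() doesn't exist yet, so this is red.
def test_fizzbuzz_of_three_is_fizz():
    assert fizzbuzz(3) == "Fizz"

# Step 2 -- just enough code to make it pass (green); then refactor and
# repeat the loop with the next failing test (5 -> "Buzz", 15 -> "FizzBuzz", ...).
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)
```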
The article linked also mentions the benefits of TDD/BDD. In summary:
Comes very close to eliminating the use of a debugger (I only use it in tests now and very rarely for those)
Code can't stay messy for longer than a few minutes
Documentation examples for an API built-in
Forces loose coupling
The link also has a (silly) walk-through example of TDD/BDD, but it's in PowerPoint (ew), so here's an html version.
Assume for a second that you already have a thorough set of functional tests that check every possible use case and you are considering adding unit tests. Since the functional tests will catch all possible bugs, the unit tests will not help catch bugs. There are, however, some tradeoffs to using functional tests exclusively compared to a combination of unit tests, integration tests, and functional tests.
Unit tests run faster. If you've ever worked on a big project where the test suite takes hours to run, you can understand why fast tests are important.
In my experience, practically speaking, functional tests are more likely to be flaky. For example, sometimes the headless capybara-webkit browser just can't reach your test server for some reason, but you re-run it and it works fine.
Unit tests are easier to debug. Assuming that the unit test has caught a bug, it's easier and faster to pinpoint exactly where the problem is.
On the other hand, assuming you decide to just keep your functional tests and not add any unit tests
If you ever need to re-architect the entire system, you may not have to rewrite any tests. If you had unit tests, a lot of them will probably be deleted or rewritten.
If you ever need to re-architect the entire system, you won't have to worry about regressions. If you had relied on unit tests to cover corner cases, but you were forced to delete or rewrite those unit tests, your new unit tests are more likely to have mistakes in them than the old unit tests.
Once you already have the functional test environment set up and you have gotten over the learning curve, writing additional functional tests is often easier, and easier to get right, than writing a combination of unit tests, integration tests, and functional tests.