Most of the discussion on this site is very positive about unit testing. I'm a fan of unit testing myself. However, I've found extensive unit testing brings its own challenges. For example, unit tests are often closely coupled to the code they test, which can make API changes increasingly costly as the volume of tests grows.
Have you found real-world situations where unit tests have been detrimental to code quality or time to delivery? How have you dealt with these situations? Are there any 'best practices' which can be applied to the design and implementation of unit tests?
There is a somewhat related question here: Why didn't unit testing work out for your project?
With extensive unit testing you will start to find that refactoring operations are more expensive for exactly the reasons you said.
IMHO this is a good thing. Expensive and big changes to an API should have a bigger cost relative to small and cheap changes. Refactoring is not a free operation, and it's important to understand the impact on both yourself and the consumers of your API. Unit tests are a great ruler for measuring how expensive an API change will be to consume.
Part of this problem, though, is relieved by tooling. Most IDEs directly or indirectly (via plugins) support refactoring operations on your code base. Using these operations to change your unit tests will relieve a bit of the pain.
Are there any 'best practices' which can be applied to the design and implementation of unit tests?
Make sure your unit tests haven't become integration tests. For example, if you have unit tests for a class Foo, then ideally the tests can only break if:
there was a change in Foo
or there was a change in the interfaces used by Foo
or there was a change in the domain model (typically you'll have some classes like "Customer" which are central to the problem space, have no room for abstraction and are therefore not hidden behind an interface)
If your tests are failing because of any other changes, then they have become integration tests and you'll get in trouble as the system grows bigger. Unit tests should have no such scalability issues because they test an isolated unit of code.
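For illustration, here is a minimal sketch of such an isolated test. Foo, PaymentGateway, Customer, and the bill method are all hypothetical names; the mocking style assumes JUnit 5 and Mockito:

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class FooTest {
    @Test
    void billingChargesTheCustomerThroughTheGateway() {
        // The collaborator is an interface replaced by a mock, so changes
        // elsewhere in the system cannot break this test - only changes to
        // Foo, the PaymentGateway interface, or the Customer domain class can.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge(any(Customer.class), eq(100))).thenReturn(true);

        Foo foo = new Foo(gateway);

        assertTrue(foo.bill(new Customer("Alice"), 100));
        verify(gateway).charge(any(Customer.class), eq(100));
    }
}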
One of the projects I worked on was heavily unit-tested; we had over 1000 unit tests for 20 or so classes. There was slightly more test code than production code. The unit tests caught innumerable errors introduced during refactoring operations; they definitely made it easy and safe to make changes, extend functionality etc. The released code had a very low bug rate.
To encourage ourselves to write the unit tests, we specifically chose to keep them 'quick and dirty' - we would bash out a test as we produced the project code, and the tests were boring and 'not real code', so as soon as we wrote one that exercised the functionality of the production code, we were done and moved on. The only criterion for the test code was that it fully exercised the API of the production code.
What we learnt the hard way is that this approach does not scale. As the code evolved, we saw a need to change the communication pattern between our objects, and suddenly I had 600 failing unit tests! Fixing this took me several days. This level of test breakage happened two or three times with further major architecture refactorings. In each case I don't believe we could reasonably have foreseen the code evolution that was required beforehand.
The moral of the story for me was this: unit-testing code needs to be just as clean as production code. You simply can't get away with cutting and pasting in unit tests. You need to apply sensible refactoring, and decouple your tests from the production code where possible by using proxy objects, as sketched below.
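As a sketch of what I mean by a proxy (the names here are hypothetical): instead of having every test call the production constructors directly, route construction through one helper, so an API change is absorbed in a single place rather than in 600 tests.

class TestObjectFactory {
    // The only place in the test suite that touches the production constructor.
    static MessageBus createMessageBus() {
        // If MessageBus later grows a mandatory Clock or Config parameter,
        // this one line changes - not every test that needs a MessageBus.
        return new MessageBus();
    }
}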
Of course all of this adds some complexity and cost to your unit tests (and can introduce bugs to your tests!), so it's a fine balance. But I do believe that the concept of 'unit tests', taken in isolation, is not the clear and unambiguous win it's often made out to be. My experience is that unit tests, like everything else in programming, require care, and are not a methodology that can be applied blindly. It's therefore surprising to me that I've not seen more discussion of this topic on forums like this one and in the literature.
Mostly in cases where the system was developed without unit testing in mind, so that testing was an afterthought rather than a design tool. When you develop with automated tests, the chances of breaking your API diminish.
An excess of false positives can slow development down, so it's important to test for what you actually want to remain invariant. This usually means writing unit tests in advance for requirements, then following up with more detailed unit tests to detect unexpected shifts in output.
I think you're looking at fixing a symptom, rather than recognizing the whole of the problem. The root problem is that a true API is a published interface*, and it should be subject to the same bounds that you would place on any programming contract: no changes! You can add to an API, and call it API v2, but you can't go back and change API v1.0, otherwise you have indeed broken backward compatibility, which is almost always a bad thing for an API to do.
(* I don't mean to call out any specific interfacing technology or language, interface can mean anything from the class declarations on up.)
I would suggest that a Test Driven Development approach would help prevent many of these kinds of problems in the first place. With TDD you would be "feeling" the awkwardness of the interfaces while you were writing the tests, and you would be compelled to fix those interfaces earlier in the process rather than waiting until after you've written a thousand tests.
One of the primary benefits of Test Driven Development is that it gives you instant feedback on the programmatic use of your class/interface. The act of writing a test is a test of your design, while the act of running the test is the test of your behavior. If it's difficult or awkward to write a test for a particular method, then that method is likely to be used incorrectly, meaning it's a weak design and it should be refactored quickly.
Yes, there are situations where unit testing can be detrimental to code quality and delivery time. If you create too many unit tests, your code will become mangled with interfaces and your code quality as a whole will suffer. Abstraction is great, but you can have too much of it.
If you're writing unit tests for a prototype or a system that has a high chance of undergoing major changes, your unit tests will have an effect on time to delivery. In these cases it's often better to write acceptance tests, which test closer to end-to-end.
If you're sure your code won't be reused and won't need to be maintained, and your project is simple and very short-term, then you shouldn't need unit tests.
Unit tests are useful for facilitating changes and maintenance. They do add a little to time to delivery, but that cost is repaid in the medium to long term. If there is no medium or long term, they may be unnecessary, manual tests being enough.
But all of that is very unlikely, so they are still the trend :)
Also, sometimes it might be a necessary business decision to invest less time in testing in order to make a faster, urgent delivery (a debt which will need to be paid back with interest later).
Slow unit tests can often be detrimental to development. This usually happens when unit tests have become integration tests that need to hit web services or the database. If your suite of unit tests takes over an hour to run, you'll often find yourself and your team essentially paralyzed for that hour, waiting to see whether the unit tests pass (since you don't want to keep building upon a broken foundation).
With that being said, I think the benefits far outweigh the drawbacks in all but the most contrived cases.
During a proposal about implementing unit tests for our projects, my manager argued that it could be more expensive to create unit tests, since you will be maintaining two sets of code. His argument is that whenever there is a change in requirements/functionality, the unit tests can become obsolete, and hence the developers must update the tests in order to pass the automated build. He said that this can be inconvenient for the developers, since their coding time is reduced and spent fixing the tests.
The reason I wanted to implement unit tests is to minimize bug occurrence, especially critical bugs, before the code is turned over to the validation team for functional testing. I also believe that the cost of creating unit tests can be recovered through better-quality systems.
Right now, his challenge to me is to create easy-to-maintain tests that are simple to modify or, if possible, tests that will not break easily. I am using xUnit testing frameworks like JUnit and mocking frameworks like mockito and powermock to help in testing.
I'm looking for tips and techniques on how to write easy-to-maintain tests and how to avoid brittle tests. Are there other tools which can come in handy for creating such tests? I'm writing code in Java and C++. Thanks.
I think you're facing a difference in culture - your manager fears the potential time sink that testing represents. It is well known that a TDD/BDD process is more expensive up-front, but as time goes on you start to reap the rewards, as "changing just this one isolated thing..." no longer throws up painful/embarrassing/costly bugs.
My suggestion is that you do some research and put together a document that tries to sell the process to your manager, putting forward a business case based on what has happened in your business already that could/would have been solved with a solid test suite.
There is one book that goes into TDD better than any website, article etc I've ever seen. I'd highly recommend it as reading for anyone wanting to practise TDD/BDD/OOP:
http://www.growing-object-oriented-software.com/ (I don't earn any money from linking this! - but it is a superb addition to my desk!).
From my experience, it is almost impossible to truly convince a "Unit Testing Skeptic".
My advice: Start adding tests and increase coverage on a selection of the most important and frequently changing parts of your product. Do this on your own time, and possibly with an "accomplice", and time your work.
After that, show the skeptics examples of regression bugs caught by these tests, and how your time spent was actually worth it. If you succeed in doing so, management will see value in devoting resources for unit tests.
As for the technical challenge, I agree with other answers here that practice makes perfect, but there are some guidelines that could help you:
Test only one thing per test. If you have 100 tests that assert the same condition, when this condition changes, you'll have to update 100 tests.
If your "class under test" is huge, with a ton of logic in it, it will be very hard to test. Try to refactor your classes into small, coherent units of logic. When one of these units changes, the number of broken tests will be relatively small.
Test your public interface, not implementation details, as in the sketch below. This allows developers to fix bugs, improve performance, and refactor, with minimal effect on the tests.
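For example (ShoppingCart and Item are hypothetical names), a test pinned to the public interface survives internal refactoring:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShoppingCartTest {
    @Test
    void totalReflectsAddedItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Item("book", 10));
        cart.add(new Item("pen", 2));

        // Assert only on observable behavior; whether the cart stores items
        // in a list, a map, or anything else is free to change.
        assertEquals(12, cart.total());
    }
}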
There are many more guidelines on the first page of results for a search on "unit testing guidelines", and you can read The Art of Unit Testing for extensive coverage of writing good unit tests.
Good luck!
Whether you write unit tests or not should not concern your management. What usually matters is that functionality is implemented fast and with no bugs. How you achieve it is up to you, developers; unit testing is just one method. It works best for loosely coupled code where you can easily mock out dependencies.
But writing good unit tests is not just about the decision to write them, or the tools you use; it is mostly about experience, which you can gain only by practicing. There are no simple recipes that will let you write good unit tests, just as there are none for code in general. If you just force people with no experience to write unit tests, productivity will surely slump, and there will be no apparent benefits. Ultimately, you should be writing unit tests because you believe they help you, not because someone else thinks they should help you.
The answer is that the unit tests should test functionality and not implementation. So if the developers refactor the code, nothing should change with the unit tests, because they aren't testing the internals, just the results.
Of course, if you change the interface or the behavior drastically, the tests will change, but then you will want to be testing the new code ANYWAY, so you'd still be writing tests.
Long story short, there is a lot of research out there showing that a good test suite saves time in the long run by a huge margin.
Having unified test data factories can reduce maintenance cost when the system under test changes.
Tests often have similar test setups with only small variations. When I started unit testing, I used copy & paste to create each new test from an existing one, and every test had a long test setup. After changes in the system under test, I had to update many tests. Today I use test data factories where only the difference from the standard is assigned.
Example:
@Test
void customerUnder18ShouldNotBeGrantedAccess() {
    // Arrange
    Customer customer = TestdataFactory.createStandardCustomer();
    customer.setAge(16);
    // Act ...
    // Assert ...
}
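The factory itself might look something like this (a sketch; the defaults shown are hypothetical). Every default lives in one place, so when Customer gains a new mandatory field, it is set here once instead of in every test setup:

class TestdataFactory {
    static Customer createStandardCustomer() {
        Customer customer = new Customer();
        customer.setName("Standard Customer");
        customer.setAge(30);        // an adult, so age checks pass by default
        customer.setCountry("US");
        return customer;
    }
}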
Here are more useful tips.
Let me start from definition:
Unit testing is a software verification and validation method in which a programmer tests whether individual units of source code are fit for use.
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up: developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments to present to development teams which write ONLY integration tests and consider them unit tests.
Any test which involves components from different layers is considered an integration test, as compared to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't the problem; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved, and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I write unit tests, and ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context-specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that collaborates with many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it.
I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component, such as a method or at most a class, is supposed to test that component in every legal scenario (of course, one abstracts into equivalence classes, but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you specified. Without unit tests, it is more likely that a change which essentially operates a unit in a new scenario would not be caught and might be missed in integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible, and extensible.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
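One way to enforce that split, assuming JUnit 5: tag the slow tests and have the check-in build exclude the tag (the class and method names here are hypothetical).

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class OrderRepositoryIT {
    @Tag("integration")   // excluded from the fast check-in build
    @Test
    void persistsAndReloadsAnOrder() {
        // ...talks to a real database, so it runs in the nightly suite instead
    }
}

The build tool then filters on the tag; in Gradle, for example, useJUnitPlatform { excludeTags("integration") } inside the test task keeps the check-in run fast.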
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable, and neither one can replace the other in the job it does. I do see a lot of integration tests masquerading as unit tests, though - having dependencies and taking a long time to run. The two should be kept separate and run as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your function are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end, I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit testing: Test small things in isolation, with low-level details including (but not limited to) method conditions, checks, loops, defaulting, calculations, etc.
Integration testing: Test a wider scope involving a number of components, which can impact each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests should be to prove that systems/components work fine when integrated together.
(I think) What the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes, testing only their own feature code because they assume someone else tested the rest, so it should be fine. This helps code coverage but can harm the application in general.
In theory, the fine-grained isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are blinkered and do not see the complete picture.
A good unit test should mock as little as possible. Mocking the API and persistence layers would be reasonable, for example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking, and if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test. (TDD)
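A sketch of that style (OrderService, OrderRepository, PriceCalculator, and Item are hypothetical names; only the persistence boundary is mocked, here with Mockito):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import java.util.List;

import org.junit.jupiter.api.Test;

class OrderServiceTest {
    @Test
    void totalIsComputedAcrossRealCollaborators() {
        // Mock only the persistence boundary...
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findItems(42L))
                .thenReturn(List.of(new Item("book", 10), new Item("pen", 2)));

        // ...and wire up the real collaborators, so the test also exercises
        // how the units actually work together.
        OrderService service = new OrderService(repository, new PriceCalculator());

        assertEquals(12, service.totalFor(42L));
    }
}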
Pros
Fast issue identification
Helps even before a PR is merged
Simple to implement and maintain
Provides a lot of data for code-quality checking (e.g. coverage)
Allows TDD (Test-Driven Development)
Cons
Misses scenario integration errors
Succumbs to developers' blindness to their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you know where to look first when it fails.
Pros:
Tests close to a real-world e2e scenario
Finds issues that developers did not think about
Very helpful in microservices architectures
Cons:
Slow most of the time
Often needs a rather complex setup
Environment pollution issues (persistence and APIs) that require cleanup steps
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.
I tried looking through all the pages about unit tests and could not find this question. If this is a duplicate, please let me know and I will delete it.
I was recently tasked to help implement unit testing at my company. I realized that I could unit test all the Oracle PL/SQL code, Java code, HTML, JavaScript, XML, XSLT, and more.
Is there such a thing as too much unit testing? Should I write unit tests for everything above or is that overkill?
This depends on the project and its tolerance for failure. There is no single answer. If you can risk a bug, then don't test everything.
When you have tons of tests, it is also likely that you will have bugs in your tests, adding to your headaches.
Test what needs testing and leave what doesn't, which often means leaving out the fairly simple stuff.
Is there such a thing as too much unit testing?
Sure. The problem is finding the right balance between enough unit testing to cover the important areas of functionality, and focusing effort on creating new value for your customers in terms of system functionality.
Unit testing code vs. leaving code uncovered by tests both have a cost.
The cost of excluding code from unit testing may include (but isn't limited to):
Increased development time due to fixing issues you can't automatically test
Fixing problems discovered during QA testing
Fixing problems discovered when the code reaches your customers
Loss of revenue due to customer dissatisfaction with defects that made it through testing
The costs of writing a unit test include (but aren't limited to):
Writing the original unit test
Maintaining the unit test as your system evolves
Refining the unit test to cover more conditions as you discover them in testing or production
Refactoring unit tests as the underlying code under test is refactored
Lost revenue when it takes longer for your application to enter the market
The opportunity cost of implementing features that could drive sales
You have to make your best judgement about what these costs are likely to be, and what your tolerance is for absorbing such costs.
In general, unit testing costs are mostly absorbed during the development phase of a system - and somewhat during its maintenance. If you spend too much time writing unit tests, you may miss a valuable window of opportunity to get your product to market. This could cost you sales or even long-term revenue if you operate in a competitive industry.
The cost of defects is absorbed during the entire lifetime of your system in production - up until the point the defect is corrected. And potentially even beyond that, if the defect is significant enough that it affects your company's reputation or market position.
Kent Beck of JUnit and JUnitMax fame answered a similar question of mine.
The question has slightly different semantics but the answer is definitely relevant
The purpose of unit tests is generally to make it possible to refactor or change with greater assurance that you did not break anything. If a change is scary because you do not know whether you will break anything, you probably need to add a test. If a change is tedious because it will break a lot of tests, you probably have too many tests (or too-fragile tests).
The most obvious case is the UI. What makes a UI look good is hard to test, and comparing against a master example tends to be fragile, so the layer of the UI governing the look of things tends not to be tested.
Another case where it might not be worth it is when the test is very hard to write and the safety it provides is minimal.
For HTML I tended to check that the data I wanted was there (using XPath queries), but did not test the entire HTML. Similarly for XSLT and XML. In JavaScript, when I could I tested libraries but left the main page alone (except that I moved most code into libraries). If the JavaScript is particularly complicated I would test more. For databases I would look into testing stored procedures and possibly views; the rest is more declarative.
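For instance, an HTML check of that kind can be done with the JDK's built-in XPath support, assuming the page is well-formed XHTML (the inline page content and element id here are hypothetical stand-ins for the real rendering output):

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.ByteArrayInputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;

import org.junit.jupiter.api.Test;
import org.w3c.dom.Document;

class PageContentTest {
    @Test
    void pageContainsTheCustomerName() throws Exception {
        // Stand-in for the real rendering call.
        String html = "<html><body><span id=\"customer-name\">Alice</span></body></html>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));

        // Check only that the data is there, not the surrounding markup.
        String name = XPathFactory.newInstance().newXPath()
                .evaluate("//span[@id='customer-name']", doc);
        assertEquals("Alice", name);
    }
}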
However, in your case first start with the stuff that worries you the most or is about to change, especially if it is not too difficult to test. Check the book Working Effectively with Legacy Code for more help.
Yes, there is such a thing as too much unit testing. One example would be unit testing in a whitebox manner, such that you're effectively testing the specific implementation; such testing slows down progress and refactoring by forcing you to write new unit tests for code whose behavior hasn't changed (because the old tests were dependent upon specific implementation details).
I suggest that in some situations you might want automated testing, but no 'unit' testing at all (Should one test internal implementation, or only test public behaviour?), and that any time spent writing unit tests would be better spent writing system tests.
While more tests is usually better (I have yet to be on a project that actually had too many tests), there's a point at which the ROI bottoms out, and you should move on. I'm assuming you have finite time to work on this project, by the way. ;)
Adding unit tests has some amount of diminishing returns -- after a certain point (Code Complete has some theories), you're better off spending your finite amount of time on something else. That may be more testing/quality activities like refactoring and code review, usability testing with real human users, etc., or it could be spent on other things like new features, or user experience polish.
As EJD said, you can't verify the absence of errors.
This means there are always more tests you could write. Any of these could be useful.
What you need to understand is that unit-testing (and other types of automated testing you use for development purposes) can help with development, but should never be viewed as a replacement for formal QA.
Some tests are much more valuable than others.
There are parts of your code that change a lot more frequently, are more prone to break, etc. These are the most economical tests.
You need to balance out the amount of testing you agree to take on as a developer. You can easily overburden yourself with unmaintainable tests. IMO, unmaintainable tests are worse than no tests because they:
Turn others off from trying to maintain a test suite or write new tests.
Detract from adding new, meaningful functionality. If automated testing is not producing a net-positive result, you should ditch it, as you would any other engineering practice that isn't paying off.
What should I test?
Test the "Happy Path" - this ensures that you get interactions right, and that things are wired together properly. But you don't adequately test a bridge by driving down it on a sunny day with no traffic.
Pragmatic Unit Testing recommends you use Right-BICEP to figure out what to test: "Right" for the happy path, then Boundary conditions, check any Inverse relationships, use another method (if it exists) to Cross-check results, force Error conditions, and finally take into account any Performance considerations that should be verified. I'd say that if you are thinking about tests to write in this way, you'll most likely figure out how to get to an adequate level of testing. You'll be able to figure out which ones are more useful and when. See the book for much more info.
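As a tiny illustration of the "Right" and "B" in that acronym (the Discount class and its apply method are hypothetical):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountTest {
    @Test
    void rightHappyPath() {
        // "Right": the result is right for a normal case.
        assertEquals(90, Discount.apply(100, 10));
    }

    @Test
    void boundaryConditions() {
        // "B": boundaries - no discount, full discount, just out of range.
        assertEquals(100, Discount.apply(100, 0));
        assertEquals(0, Discount.apply(100, 100));
        assertThrows(IllegalArgumentException.class, () -> Discount.apply(100, 101));
    }
}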
Test at the right level
As others have mentioned, unit tests are not the only way to write automated tests. Other types of frameworks may be built off of unit tests, but provide mechanisms to do package level, system or integration tests. The best bang for the buck may be at a higher level, and just using unit testing to verify a single component's happy path.
Don't be discouraged
I'm painting a more grim picture here than I expect most developers will find in reality. The bottom line is that you make a commitment to learn how to write tests and write them well. But don't let fear of the unknown scare you into not writing any tests. Unlike production code, tests can be ditched and rewritten without many adverse effects.
Unit test any code that you think might change.
You should only really write unit tests for any code which you have written yourself. There is no need to test the functionality inherently provided to you.
For example, if you've been given a library with an add function, you should not be testing that add(1,2) returns 3. Now, if you've WRITTEN that code, then yes, you should be testing it.
Of course, whoever wrote the library may not have tested it and it may not work... in which case you should write it yourself or get a separate one with the same functionality.
Well, you certainly shouldn't unit test everything, but at least the complicated tasks or those that will most likely contain errors/cases you haven't thought of.
The point of unit testing is being able to run a quick set of tests to verify that your code is correct. This lets you verify that your code matches your specification and also lets you make changes and ensure that they don't break anything.
Use your judgement. You don't want to spend all of your time writing unit tests or you won't have any time to write actual code to test.
When you've unit tested your unit tests, thinking you have thereby achieved 200% coverage.
There is a development approach called test-driven development which essentially says that there is no such thing as too much (non-redundant) unit testing. That approach, however, is not a testing approach, but rather a design approach which relies on working code and a more or less complete unit test suite with tests which drive every single decision made about the codebase.
In a non-TDD situation, automated tests should exercise every line of code you write (branch coverage in particular is good), but even then there are exceptions - you shouldn't be testing vendor-supplied platform or framework code unless you know for certain that there are bugs in that platform which will affect you. You shouldn't be testing thin wrappers (equally, if you need to test it, the wrapper is not thin). You should be testing all core business logic, and it is certainly helpful to have some set of tests that exercise your database at some elemental level, although those tests will never work in the common situation where unit tests are run every time you compile.
Database testing specifically is intrinsically slow and, depending on how much logic is held in your database, quite difficult to get right. Typically, things like databases, HTML/XML documents and templating, and other document-ish aspects of a program are verified more so than tested. The difference is usually that testing tries to exercise execution paths, whereas verification tries to verify inputs and outputs directly.
To learn more about this I would suggest reading up on "Code Coverage". There is a lot of material available if you're curious about this.
If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
For instance if I have unit tests and acceptance tests for a feature do I still need integration tests or should the unit and acceptance tests cover the same ground? Is there overlap between test types?
I'm talking about automated tests here. I know manual testing is still needed for things like ease of use, etc.
If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
No. Tests can only verify what you have thought of. Not what you haven't thought of.
I'd recommend reading chapters 20 - 22 in the 2nd edition of Code Complete. It covers software quality very well.
Here's a quick breakdown of some of the key points (all credit goes to McConnell, 2004)
Chapter 20 - The Software-Quality Landscape:
No single defect-detection technique is completely effective by itself
The earlier you find a defect, the less intertwined it will become with the rest of your code and the less damage it will cause
Chapter 21 - Collaborative Construction:
Collaborative development practices tend to find a higher percentage of defects than testing and to find them more efficiently
Collaborative development practices tend to find different kinds of errors than testing does, implying that you need to use both reviews and testing to ensure the quality of your software
Pair programming typically costs about the same as inspections and produces similar-quality code
Chapter 22 - Developer Testing:
Automated testing is useful in general and is essential for regression testing
The best way to improve your testing process is to make it regular, measure it, and use what you learn to improve it
Writing test cases before the code takes the same amount of time and effort as writing the test cases after the code, but it shortens defect-detection-debug-correction-cycles (Test Driven Development)
As far as how you are formulating your unit tests, you should consider basis testing, data-flow analysis, boundary analysis etc. All of these are explained in great detail in the book (which also includes many other references for further reading).
Maybe this isn't exactly what you were asking, but I would say automated testing is definitely not enough of a strategy. You should also consider such things as pair programming, formal reviews (or informal reviews, depending on the size of the project) and test scaffolding along with your automated testing (unit tests, regression testing etc.).
The idea of multiple testing cycles is to catch problems as early as possible when things change.
Unit tests should be done by the developers to ensure the units work in isolation.
Acceptance tests should be done by the client to ensure the system meets the requirements.
However, something has changed between those two points that should also be tested. That's the integration of units into a product before being given to the client.
That's something that should first be tested by the product creator, not the client. The minute you involve the client, things slow down, so the more fixes you can do before they get their grubby little hands on it, the better.
In a big shop (like ours), there are unit tests, integration tests, globalization tests, master-build tests and so on at each point where the deliverable product changes. Only once all high severity bugs are fixed (and a plan for fixing low priority bugs is in place) do we unleash the product to our beta clients.
We do not want to give them a dodgy product simply because fixing a bug at that stage is a lot more expensive (especially in terms of administrivia) than anything we do in-house.
It's really impossible to know whether or not you have enough tests based simply on whether you have a test for every method and feature. Typically I will combine testing with coverage analysis to ensure that all of my code paths are exercised in my unit tests. Even this is not really enough, but it can be a guide to where you may have introduced code that isn't exercised by your tests. This should be an indication that more tests need to be written or, if you're doing TDD, you need to slow down and be more disciplined. :-)
Tests should cover both good and bad paths, especially in unit tests. Your acceptance tests may be more or less concerned with the bad path behavior but should at least address common errors that may be made. Depending on how complete your stories are, the acceptance tests may or may not be adequate. Often there is a many-to-one relationship between acceptance tests and stories. If you only have one automated acceptance test for every story, you probably don't have enough unless you have different stories for alternate paths.
Multiple layers of testing can be very useful. Unit tests to make sure the pieces behave; integration to show that clusters of cooperating units cooperate as expected, and "acceptance" tests to show that the program functions as expected. Each can catch problems during development. Overlap per se isn't a bad thing, though too much of it becomes waste.
That said, the sad truth is that you can never ensure that the product behaves "as expected", because expectation is a fickle, human thing that gets translated very poorly onto paper. Good test coverage won't prevent a customer from saying "that's not quite what I had in mind...". Frequent feedback loops help there. Consider frequent demos as a "sanity test" to add to your manual mix.
Probably not, unless your software is really, really simple and has only one component.
Unit tests are very specific, and you should cover everything thoroughly with them. Go for high code-coverage here. However, they only cover one piece of functionality at a time and not how things work together. Acceptance tests should cover only what the customer really cares about at a high level, and while it will catch some bugs in how things work together, it won't catch everything as the person writing such tests will not know about the system in depth.
Most importantly, these tests may not be written by a tester. Unit tests should be written by developers and run frequently (up to every couple minutes, depending on coding style) by the devs (and by the build system too, ideally). Acceptance tests are often written by the customer or someone on behalf of the customer, thinking about what matters to the customer. However, you also need tests written by a tester, thinking like a tester (and not like a dev or customer).
You should also consider the following sorts of tests, which are generally written by testers:
Functional tests, which will cover pieces of functionality. This may include API testing and component-level testing. You will generally want good code-coverage here as well.
Integration tests, which put two or more components together to make sure that they work together. You don't want one component to put out the position in the array where the object is (0-based) when the other component expects the count of the object ("nth object", which is 1-based), for example (see the sketch after this list). Here, the focus is not on code coverage but on coverage of the interfaces (general interfaces, not code interfaces) between components.
System-level testing, where you put everything together and make sure it works end-to-end.
Testing for non-functional features, like performance, reliability, scalability, security, and user-friendliness (there are others; not all will relate to every project).
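As a contrived sketch of that 0-based/1-based mismatch (all names hypothetical), an integration test pins the contract between the two components:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.Test;

class PositionContractTest {
    @Test
    void rendererAndCatalogAgreeOnPositions() {
        Catalog catalog = new Catalog(List.of("apple", "banana"));
        Renderer renderer = new Renderer();

        // If Catalog handed out a 0-based index while Renderer expected a
        // 1-based "nth object", this assertion would catch the disagreement.
        assertEquals("apple", renderer.nthItem(catalog, 1));
    }
}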
Integration tests are for when your code integrates with other systems, such as third-party applications or other in-house systems such as the environment, database, etc. Use integration tests to ensure that the behavior of the code is still as expected.
In short, no.
To begin with, your story cards should have acceptance criteria - that is, acceptance criteria specified by the product owner in conjunction with the analyst, specifying the behavior required; if they are met, the story card will be accepted.
The acceptance criteria should drive the automated unit tests (done via TDD) and the automated regression/functional tests, which should be run daily. Remember, we want to move defects to the left; that is, the sooner we find 'em, the cheaper and faster they are to fix. Furthermore, continuous testing enables us to refactor with confidence, which is required to maintain a sustainable pace for development.
In addition, you need automated performance tests. Running a profiler daily or overnight would provide insight into the consumption of CPU and memory and whether any memory leaks exist. Furthermore, a tool like LoadRunner will enable you to place a load on the system that reflects actual usage. You will be able to measure response times and CPU and memory consumption on a production-like machine while it runs under load.
The automated performance test should reflect actual usage of the app. You measure the number of business transactions (i.e., for a web application, the clicks on a page and the responses to the users, or round trips to the server) and determine the mix of such transactions along with the rate at which they arrive per second. Such information will enable you to properly design the automated load test required to performance-test the application. As is often the case, some of the performance issues will trace back to the implementation of the application, while others will be determined by the configuration of the server environment.
Remember, your application will be performance tested. The question is whether the first performance test happens before or after you release the software. Believe me, the worst place to have a performance problem is in production. Performance issues can be the hardest to fix and can cause a deployment to all users to fail, thus cancelling the project.
Finally, there is User Acceptance Testing (UAT). These are tests designed by the product owner/business partner to test the overall system prior to release. Generally, because of all the other testing, it is not uncommon for the application to return zero defects during UAT.
It depends on how complex your system is. If your acceptance tests (which satisfy the customer's requirements) exercise your system from front to back, then no, you don't.
However, if your product relies on other tiers (like backend middleware/database) then you do need a test that proves that your product can happily link up end-to-end.
As other people have commented, tests don't necessarily prove the project functions as expected, just how you expect it to work.
Frequent feedback loops to the customer, and/or tests that are written/parsable in a way the customer understands (say, for example, in a BDD style), can really help.
If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
This is enough to show your software is functionally correct, at least as far as your test coverage is sufficient. Now, depending on what you're developing, there are certainly non-functional requirements that matter; think about reliability, performance, and scalability.
Technically, a full suite of acceptance tests should cover everything. That being said, they're not "enough" for most definitions of enough. By having unit tests and integration tests, you can catch bugs/issues earlier and in a more localized manner, making them much easier to analyze and fix.
Consider that a full suite of manually executed tests, with the directions written on paper, would be enough to validate that everything works as expected. However, if you can automate the tests, you'd be much better off because it makes doing the testing that much easier. The paper version is "complete", but not "enough". In the same way, each layer of tests adds more to the value of "enough".
It's also worth noting that the different sets of tests tend to test the product/code from a different "viewpoint". Much the same way QA may pick up bugs that dev never thought to test for, one set of tests may find things the other set wouldn't.
Acceptance testing can even be done manually by the client if the system at hand is small.
Unit tests and small integration tests (consisting of unit-like tests) are there for you to build a sustainable system.
Don't try to write tests for every part of the system. That is brittle (easy to break) and overwhelming.
Decide on the critical parts of the system that take too much time to test manually, and write acceptance tests only for those parts, to make things easy for everyone.
I just had a conversation with my lead developer who disagreed that unit tests are all that necessary or important. In his view, functional tests with a high enough code coverage should be enough since any inner refactorings (interface changes, etc.) will not lead to the tests being needed to be rewritten or looked over again.
I tried explaining but didn't get very far, and thought you guys could do better. ;-) So...
What are some good reasons to unit test code that functional tests don't offer? What dangers are there if all you have are functional tests?
Edit #1 Thanks for all the great answers. I wanted to add that by functional tests I don't mean only tests on the entire product, but rather also tests on modules within the product, just not on the low level of a unit test with mocking if necessary, etc. Note also that our functional tests are automatic, and are continuously running, but they just take longer than unit tests (which is one of the big advantages of unit tests).
I like the brick vs. house example. I guess what my lead developer is saying is testing the walls of the house is enough, you don't need to test the individual bricks... :-)
Off the top of my head
Unit tests are repeatable without effort. Write once, run thousands of times, no human effort required, and much faster feedback than you get from a functional test
Unit tests test small units, so immediately point to the correct "sector" in which the error occurs. Functional tests point out errors, but they can be caused by plenty of modules, even in co-operation.
I'd hardly call an interface change "an inner refactoring". Interface changes tend to break a lot of code, and (in my opinion) force a new test loop rather than none.
Unit tests are for devs to see where the code failed.
Functional tests are for the business to see if the code does what they asked for.
Unit tests check that you've manufactured your bricks correctly; functional tests check that the house meets the customer's needs.
They're different things, but the latter will be much easier if the former has been carried out.
It can be a lot more difficult to find the source of problems if a functional test fails, because you're effectively testing the entire codebase every time. By contrast, unit tests compartmentalize the potential problem areas. If all the other unit tests succeed but this one, you have an assurance that the problem is in the code you're testing and not elsewhere.
Bugs should be caught as soon as possible in the development cycle - having bugs move from design to code, or code to test, or (hopefully not) test to production increases the cost and time required to fix it.
Our shop enforces unit testing for that reason alone (I'm sure there are other reasons but that's enough for us).
If you use a pure Extreme Programming / agile development methodology, the unit tests are always required, as they are the requirements for development.
In pure XP/agile, one bases all requirements on the tests which are going to be performed against the application:
Functional tests - generate functional requirements.
Unit tests - generate function or object requirements.
Other than that, unit testing can be used to keep a persistent record of function requirements. I.e., if you need to change the way a function works internally while its inputs and outputs stay untouched, then unit testing is the best way to keep track of possible problems, as you only need to run the tests.
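For example (the Slug class and its from method are hypothetical), a test like this pins the input/output contract, so the implementation behind it can be rewritten freely:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugTest {
    @Test
    void contractIsStableAcrossRewrites() {
        // Same inputs, same outputs - whether Slug.from uses a regex today
        // or a hand-rolled loop tomorrow, these tests keep passing.
        assertEquals("hello-world", Slug.from("Hello, World!"));
        assertEquals("", Slug.from(""));
    }
}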
In TDD/BDD, unit tests are necessary to write the program. The process goes
failing test -> code -> passing test -> refactor -> repeat
The article linked also mentions the benefits of TDD/BDD. In summary:
Comes very close to eliminating the use of a debugger (I only use it in tests now and very rarely for those)
Code can't stay messy for longer than a few minutes
Documentation examples for an API built-in
Forces loose coupling
The link also has a (silly) walk-through example of TDD/BDD, but it's in PowerPoint (ew), so here's an html version.
Assume for a second that you already have a thorough set of functional tests that check every possible use case, and you are considering adding unit tests. Since the functional tests will catch all possible bugs, the unit tests will not help catch more bugs. There are, however, some tradeoffs to using functional tests exclusively compared to a combination of unit tests, integration tests, and functional tests.
Unit tests run faster. If you've ever worked on a big project where the test suite takes hours to run, you can understand why fast tests are important.
In my experience, practically speaking, functional tests are more likely to be flaky. For example, sometimes the headless capybara-webkit browser just can't reach your test server for some reason, but you re-run it and it works fine.
Unit tests are easier to debug. Assuming that the unit test has caught a bug, it's easier and faster to pinpoint exactly where the problem is.
On the other hand, assuming you decide to just keep your functional tests and not add any unit tests:
If you ever need to re-architect the entire system, you may not have to rewrite any tests. If you had unit tests, a lot of them will probably be deleted or rewritten.
If you ever need to re-architect the entire system, you won't have to worry about regressions. If you had relied on unit tests to cover corner cases, but you were forced to delete or rewrite those unit tests, your new unit tests are more likely to have mistakes in them than the old unit tests.
Once you already have the functional test environment set up and have gotten over the learning curve, additional functional tests are often easier to write, and easier to write correctly, than a combination of unit tests, integration tests, and functional tests.