Can you perform unit / integration tests without writing test code? - unit-testing

In our project, test procedures and expected test results (test specifications)
are created in a document.
Then, we perform testing on a built product / release.
Here, no test code or test tools are involved.
Is this acceptable for unit / integration testing?

What you are doing is "manual testing".
Manual testing, by definition, is not and can never be unit testing.
Manual testing can be used for integration testing, and in fact should be used to some degree, because automated tests cannot spot all forms of unexpected error conditions. Especially bugs having to do with layout and things not "looking right" (which happen to be rather common in web apps).
However, if you have no automated tests at all, it means that your application is insufficiently tested, period. Because it's completely infeasible to manually test every detailed aspect of an application for every release - no organization would be willing or able to pay for the effort that would require.

Is this acceptable for unit / integration testing?
No. What you describe is neither unit nor integration testing; it is taking the build for a walk around the block to get a cup of coffee.

Unit testing is - as I understand it - the testing of individual units of code. Relatively low level and usually developed at the same time as the code itself.
To do that you need to be working in code as well, and ultimately the code that performs these tests is a testing tool, even if for some reason you aren't using a framework.
So no, if you aren't using testing tools or testing code, you aren't doing unit testing.
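To make "testing code" concrete, here is a minimal sketch of what an automated unit test can look like, using Python and pytest purely as an example (the add function is hypothetical):

    # test_calculator.py - a minimal, hypothetical unit test.
    # Running `pytest` would discover and execute it automatically.

    def add(a, b):
        """The unit under test (normally imported from production code)."""
        return a + b

    def test_add_returns_sum_of_two_numbers():
        # The assertion encodes the expected result; a mismatch fails the test.
        assert add(2, 3) == 5

    def test_add_handles_negative_numbers():
        assert add(-2, 3) == 1

The point is simply that the expected results from the test specification are encoded as executable assertions rather than checked by hand.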
Theoretically you could be doing integration testing manually, but it's unreliable because people tend to be inconsistent, and expensive because people are slower than machines.
Ultimately, the more testing you can automate, the faster and more accurate your tests will be, and the more you will free your QA personnel to test things that can only be tested manually.

Unit and Integration testing are two very different things, and what constitutes "acceptable" depends entirely on your organisation. It may very well be acceptable to test the system, rather than each unit separately.
Personally, I'm not a fan of automated unit testing, since the vast majority of issues I encounter are the sort of things that only ever come to light in the context of a system test.
I tend to develop incrementally, so that as what I'm working on grows, it becomes its own test harness, and foundations are proved to be solid before building anything on them.
I'd love to be able to automate system testing. It reveals all the things I would never have thought of in a million years of writing unit tests.

Unit testing vs integration testing & its maintenance

I am working on a web application which is constantly being enhanced in a parallel development environment (two requirements are developed in two different environments, and the first code base is merged into the second when the first requirement is released to production).
My question is about having both integration testing and unit testing for the app, and about their maintenance.
Unit testing with mocking makes it difficult to maintain tests in parallel development; integration testing (using Selenium) in parallel development makes it difficult to maintain the required data in the database (which may still be easier than fixing a failed unit test).
I am leaning towards integration testing, as merging code will not break a use case, but a unit test may fail after a merge because of its expectations.
The app is a little old and not properly designed for unit testing, so refactoring code and maintaining unit test cases is becoming hard. Please suggest a better approach for testing.
Unit tests and integration tests both have their place.
Unit tests, as the name indicates, verify that the unit is behaving as expected.
It's true that integration tests cover the same code that is covered by the unit tests, but unit tests help you pinpoint issues more easily. Instead of investigating the failure to understand which part of the system is responsible for the issue, you have a failing unit test to help you find out.
Another reason to have unit tests is speed. Unit tests should be fast. They should not rely on various system dependencies (you should be using mocks and stubs for that). If you have unit tests with good coverage you get feedback fast regarding the quality of the system - before you start your long test cycle.
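As a rough sketch of what "not relying on system dependencies" can look like, assume a hypothetical OrderService that normally talks to a payment gateway; the gateway is replaced with a test double so the test stays fast (Python's unittest.mock used as an example):

    # Hypothetical example: isolating a unit from a slow external dependency.
    from unittest.mock import Mock

    class OrderService:
        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway  # injected dependency

        def place_order(self, amount):
            # Delegates the actual charge to the gateway.
            result = self.payment_gateway.charge(amount)
            return "confirmed" if result else "declined"

    def test_place_order_confirms_when_charge_succeeds():
        gateway = Mock()                  # stands in for the real, slow gateway
        gateway.charge.return_value = True
        service = OrderService(gateway)
        assert service.place_order(100) == "confirmed"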
In practice, you usually employ several levels of automated tests (a sketch of how such tiers can be tagged and selected follows this list):
Smoke tests. These are unit tests that exercise various parts of the system in the most basic scenarios. They are usually employed as part of a gated check-in that keeps bad code from being checked in. These need to be fast.
Regression (unit) tests. Usually part of continuous integration. Again, these need to be as fast as possible so that the build does not take too long.
Full regression + integration tests. These are closer to system tests and take longer to run. They usually run once a day in a nightly build, or even less frequently (depending on length).
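A minimal sketch of how such tiers might be tagged and selected, assuming pytest with custom markers registered in the project configuration (all names here are made up):

    # Hypothetical sketch: tagging tests so each tier can be run separately.
    # Assumes the markers are declared in pytest.ini / pyproject.toml.
    import pytest

    @pytest.mark.smoke
    def test_service_starts():
        assert True  # most basic "is it alive" check, run on gated check-in

    @pytest.mark.regression
    def test_discount_calculation():
        assert round(100 * 0.9, 2) == 90.0  # fast unit-level regression check

    @pytest.mark.integration
    def test_full_order_flow():
        ...  # slower end-to-end check, reserved for the nightly build

    # Illustrative invocations:
    #   pytest -m smoke        -> gated check-in
    #   pytest -m regression   -> continuous integration build
    #   pytest -m integration  -> nightly / full regression run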
If you find the cost of maintaining some types of tests too high, it's sensible to consider dropping them. I think integration tests are more important for QA, and if I could only choose one type of testing (unit vs integration) that's what I would go with.
However, first you might also want to consider if there is a way to factor your unit tests to avoid these maintenance issues. When I first started using mocks, it took me a while to find the right balance. In my experience I find it best to avoid mocks (with expectations) as much as possible. I do use mocking libraries, though, just mostly for trivial stubbing rather than more complex mocking. I wrote a blog post about this a while back if you are interested.
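To illustrate the distinction with a hedged sketch (a hypothetical top_customer function and repository, using Python's unittest.mock): a stub just supplies canned data, while a mock with expectations also asserts exactly how the collaborator was called, which couples the test to the implementation.

    from unittest.mock import Mock

    def top_customer(repository):
        """Hypothetical unit under test: picks the customer with the highest total."""
        customers = repository.load_customers()
        return max(customers, key=lambda c: c["total"])["name"]

    def test_with_trivial_stub():
        # Stubbing: the fake only provides canned data; we assert on the result.
        repo = Mock()
        repo.load_customers.return_value = [
            {"name": "Ada", "total": 300},
            {"name": "Bob", "total": 120},
        ]
        assert top_customer(repo) == "Ada"

    def test_with_expectations():
        # Mocking with expectations: the test also pins down *how* the
        # collaborator was used, so an internal refactoring can break it.
        repo = Mock()
        repo.load_customers.return_value = [{"name": "Ada", "total": 300}]
        top_customer(repo)
        repo.load_customers.assert_called_once_with()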
With regard to the unit test issue:
I would suspect that the unit tests you are constantly having to refactor are a bit too low of a level. Generally it is better to have somewhat higher level tests that exercise the lower level code by varying the inputs.
Assuming there is not unnecessary code, the higher level tests should still provide good code coverage (if they can't reach code, why is the code there?).
With regard to the functional test issue:
You may want to consider a representative data sample (or several). That way you have a known input, so you should get predictable output.
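One possible way to get that "known input, predictable output" property is a small, fixed sample data set shared by the tests; a sketch using a pytest fixture (all names hypothetical):

    # Hypothetical sketch: a representative, fixed data sample for functional tests.
    import pytest

    @pytest.fixture
    def sample_orders():
        # A small, hand-picked data set with known totals; tests can rely on
        # these exact values instead of whatever happens to be in a shared DB.
        return [
            {"id": 1, "customer": "Ada", "total": 250.0},
            {"id": 2, "customer": "Bob", "total": 99.5},
        ]

    def total_revenue(orders):
        """Unit under test (stand-in for real reporting code)."""
        return sum(o["total"] for o in orders)

    def test_revenue_report_uses_known_sample(sample_orders):
        # Because the input is fixed, the expected output is predictable.
        assert total_revenue(sample_orders) == 349.5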
Avoid unit testing altogether. Or start with unit testing and, once you reach integration tests, "pass the relay baton" to integration testing. Delete the unit tests previously written, or leave some for documentation.
Code coverage cannot be measured with integration testing (as it passes through different systems).
Even though maintaining both unit tests and integration tests is theoretically ideal, let us not risk it, say the experienced.
Only an advanced unit tester can become an integration tester. So only those who have become integration testers can advise against unit testing. We had better listen to them, because they are passionate about unit testing and are willing to sacrifice the 'fun' of writing unit tests for a better 'safety net' for the team.
Integration tests are difficult to implement (seeding the DB, recreating infrastructure, etc.), and we simply won't have time to maintain both (unit and integration tests).
Unit testing can still be good for library (.dll) and framework development, complex calculations, and also for product development companies. But how many of us work on these? These days, for web development, everyone can work on end-to-end scenarios easily, as there are frameworks already available. For this, integration tests are best anyway.
Some helpful links:
"Why You Need to Stop Writing Unit Tests" https://hackernoon.com/why-you-need-to-stop-writing-unit-tests
"Write tests. Not too many. Mostly integration."
https://kentcdodds.com/blog/write-tests
"The No. 1 unit testing best practice: Stop doing it"
https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
"Unit test kills"
https://www.linkedin.com/pulse/before-you-feed-unit-test-beast-s-a-n-j-ay-mohan/?trackingId=5eIFIJGSBpnGuXMEz2PnwQ%3D%3D

Why using Integration tests instead of unit tests is a bad idea?

Let me start from definition:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, very often these terms are mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect later (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments for the development teams that write ONLY integration tests and consider them unit tests.
Any test that involves components from different layers is considered an integration test, in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't a problem, aren't the problem. Using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved, and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need more debugging on failures.
The combination of scenarios is too big for integration tests alone when the code is not unit tested.
Mostly I write unit tests, and about 10 times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD perspective, unit tests aren't tests, they're specifications of functionality. Integration tests, by contrast, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it. I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component, such as a method or at most a class, is supposed to test that component in every legal scenario (of course, one abstracts into equivalence classes, but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you listed. Without unit tests, it is more likely that a change that essentially operates a unit in a new scenario would not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
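As a small illustration of covering the major equivalence classes and boundaries of one unit, here is a hypothetical classify_age function with a parametrised pytest test (names are made up):

    # Hypothetical sketch: one test, several equivalence classes and boundaries.
    import pytest

    def classify_age(age):
        """Unit under test: maps an age to a coarse category."""
        if age < 0:
            raise ValueError("age cannot be negative")
        if age < 18:
            return "minor"
        return "adult"

    @pytest.mark.parametrize("age, expected", [
        (0, "minor"),    # lower boundary
        (17, "minor"),   # just below the threshold
        (18, "adult"),   # the threshold itself
        (90, "adult"),   # ordinary adult case
    ])
    def test_classify_age_covers_major_classes(age, expected):
        assert classify_age(age) == expected

    def test_classify_age_rejects_negative_input():
        with pytest.raises(ValueError):
            classify_age(-1)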
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extensible.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours.
Integration tests should be run overnight or manually as needed.
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests, though, with dependencies and long run times. The two kinds should be kept separate and both run as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your functions are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low level details including but not limited to 'method conditions', checks, loops, defaulting, calculations etc.
Integration testing: Test a wider scope that involves a number of components, which can impact each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests is to prove that systems/components work fine when integrated together.
(I think) what the OP refers to here as integration tests lean more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes, testing only their own feature code because they think someone else tested the rest, so it should be fine. This helps code coverage but can harm the application in general.
In theory, the small, isolated scope of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking APIs and persistence would be reasonable, for example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking, and if every developer working on the project does the same it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them; they aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test (TDD).
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code quality checking (e.g. coverage)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developers' blindness to their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs that the unit tests could not cover, so you know where to look first when it fails.
Pros:
Tests close to real-world end-to-end scenarios
Finds issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.

Are there situations where unit tests are detrimental to code?

Most of the discussion on this site is very positive about unit testing. I'm a fan of unit testing myself. However, I've found extensive unit testing brings its own challenges. For example, unit tests are often closely coupled to the code they test, which can make API changes increasingly costly as the volume of tests grows.
Have you found real-world situations where unit tests have been detrimental to code quality or time to delivery? How have you dealt with these situations? Are there any 'best practices' which can be applied to the design and implementation of unit tests?
There is a somewhat related question here: Why didn't unit testing work out for your project?
With extensive unit testing you will start to find that refactoring operations are more expensive for exactly the reasons you said.
IMHO this is a good thing. Expensive and big changes to an API should have a bigger cost relative to small and cheap changes. Refactoring is not a free operation, and it's important to understand the impact on both yourself and consumers of your API. Unit tests are a great ruler for measuring how expensive an API change will be for consumers.
Part of this problem though is relieved by tooling. Most IDEs directly or indirectly (via plugins) support refactoring operations in their code base. Using these operations to change your unit tests will relieve a bit of the pain.
Are there any 'best practices' which can be applied to the design and implementation of unit tests?
Make sure your unit tests haven't become integration tests. For example if you have unit tests for a class Foo, then ideally the tests can only break if
there was a change in Foo
or there was a change in the interfaces used by Foo
or there was a change in the domain model (typically you'll have some classes like "Customer" which are central to the problem space, have no room for abstraction and are therefore not hidden behind an interface)
If your tests are failing because of any other changes, then they have become integration tests and you'll get in trouble as the system grows bigger. Unit tests should have no such scalability issues because they test an isolated unit of code.
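A minimal sketch of that kind of isolation, with hypothetical names: Foo depends only on an abstract rate provider, so its test can only break when Foo itself or that interface changes.

    # Hypothetical sketch: Foo is tested against an interface, not the real system.
    from abc import ABC, abstractmethod

    class RateProvider(ABC):
        @abstractmethod
        def rate_for(self, currency: str) -> float: ...

    class Foo:
        def __init__(self, rates: RateProvider):
            self._rates = rates

        def convert(self, amount: float, currency: str) -> float:
            return amount * self._rates.rate_for(currency)

    class FixedRates(RateProvider):
        """Test double: changes to the real rate service cannot break this test."""
        def rate_for(self, currency: str) -> float:
            return 2.0

    def test_convert_multiplies_amount_by_rate():
        assert Foo(FixedRates()).convert(10.0, "EUR") == 20.0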
One of the projects I worked on was heavily unit-tested; we had over 1000 unit tests for 20 or so classes. There was slightly more test code than production code. The unit tests caught innumerable errors introduced during refactoring operations; they definitely made it easy and safe to make changes, extend functionality etc. The released code had a very low bug rate.
To encourage ourselves to write the unit tests, we specifically chose to keep them 'quick and dirty' - we would bash out a test as we produced the project code, and the tests were boring and 'not real code', so as soon as we wrote one that exercised the functionality of the production code, we were done and moved on. The only criterion for the test code was that it fully exercised the API of the production code.
What we learnt the hard way is that this approach does not scale. As the code evolved, we saw a need to change the communication pattern between our objects, and suddenly I had 600 failing unit tests! Fixing this took me several days. This level of test breakage happened two or three times with further major architecture refactorings. In each case I don't believe we could reasonably have foreseen the code evolution that was required beforehand.
The moral of the story for me was this: unit-testing code needs to be just as clean as production code. You simply can't get away with cutting and pasting in unit tests. You need to apply sensible refactoring, and decouple your tests from the production code where possible by using proxy objects.
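One common way to apply that advice is a small builder/helper that all tests go through, so a changed constructor or communication pattern is fixed in one place rather than in hundreds of tests; a hypothetical sketch:

    # Hypothetical sketch: a test helper that shields tests from constructor changes.

    class Order:
        def __init__(self, customer, items, currency="EUR"):
            self.customer = customer
            self.items = items
            self.currency = currency

        def total(self):
            return sum(price for _, price in self.items)

    def make_order(customer="Ada", items=None, **kwargs):
        """Single point of change: if Order's signature evolves, only this
        helper needs updating, not every test that builds an Order."""
        return Order(customer, items or [("book", 12.5)], **kwargs)

    def test_total_sums_item_prices():
        order = make_order(items=[("book", 12.5), ("pen", 2.5)])
        assert order.total() == 15.0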
Of course all of this adds some complexity and cost to your unit tests (and can introduce bugs to your tests!), so it's a fine balance. But I do believe that the concept of 'unit tests', taken in isolation, is not the clear and unambiguous win it's often made out to be. My experience is that unit tests, like everything else in programming, require care, and are not a methodology that can be applied blindly. It's therefore surprising to me that I've not seen more discussion of this topic on forums like this one and in the literature.
Mostly in cases where the system was developed without unit testing in mind: testing was an afterthought and not a design tool. When you develop with automated tests from the start, the chance of breaking your API diminishes.
An excess of false positives can slow development down, so it's important to test for what you actually want to remain invariant. This usually means writing unit tests in advance for requirements, then following up with more detailed unit tests to detect unexpected shifts in output.
I think you're looking at fixing a symptom, rather than recognizing the whole of the problem. The root problem is that a true API is a published interface*, and it should be subject to the same bounds that you would place on any programming contract: no changes! You can add to an API, and call it API v2, but you can't go back and change API v1.0, otherwise you have indeed broken backward compatibility, which is almost always a bad thing for an API to do.
(* I don't mean to call out any specific interfacing technology or language, interface can mean anything from the class declarations on up.)
I would suggest that a Test Driven Development approach would help prevent many of these kinds of problems in the first place. With TDD you would be "feeling" the awkwardness of the interfaces while you were writing the tests, and you would be compelled to fix those interfaces earlier in the process rather than waiting until after you've written a thousand tests.
One of the primary benefits of Test Driven Development is that it gives you instant feedback on the programmatic use of your class/interface. The act of writing a test is a test of your design, while the act of running the test is the test of your behavior. If it's difficult or awkward to write a test for a particular method, then that method is likely to be used incorrectly, meaning it's a weak design and it should be refactored quickly.
Yes, there are situations where unit testing can be detrimental to code quality and delivery time. If you create too many unit tests, your code can become mangled with interfaces and your code quality as a whole will suffer. Abstraction is great, but you can have too much of it.
If you're writing unit tests for a prototype or a system that has a high chance of major changes, your unit tests will have an effect on time to delivery. In these cases it's often better to write acceptance tests, which test closer to end to end.
If you're sure your code won't be reused, won't need to be maintained, and your project is simple and very short term, then you shouldn't need unit tests.
Unit tests are useful to facilitate changes and maintenance. They do add a little to time to delivery, but that is paid back in the medium/long term. If there is no medium/long term, they may be unnecessary, with manual tests being enough.
But all of this is very unlikely, so unit tests are still the trend :)
Also, sometimes it might be a necessary business decision to invest less time in testing in order to have a faster, urgent delivery (which will need to be paid back with interest later).
Slow unit tests can often be detrimental to development. This usually happens when unit tests become integration tests that need to hit web services or the database. If your suite of unit tests takes over an hour to run, often you'll find yourself and your team essentially paralyzed for that hour, waiting to see if the unit tests pass or not (since you don't want to keep building upon a broken foundation).
With that being said, I think the benefits far outweigh the drawbacks in all but the most contrived cases.

Are unit tests and acceptance tests enough?

If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
For instance if I have unit tests and acceptance tests for a feature do I still need integration tests or should the unit and acceptance tests cover the same ground? Is there overlap between test types?
I'm talking about automated tests here. I know manual testing is still needed for things like ease of use, etc.
If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
No. Tests can only verify what you have thought of. Not what you haven't thought of.
I'd recommend reading chapters 20 - 22 in the 2nd edition of Code Complete. It covers software quality very well.
Here's a quick breakdown of some of the key points (all credit goes to McConnell, 2004)
Chapter 20 - The Software-Quality Landscape:
No single defect-detection technique is completely effective by itself
The earlier you find a defect, the less intertwined it will become with the rest of your code and the less damage it will cause
Chapter 21 - Collaborative Construction:
Collaborative development practices tend to find a higher percentage of defects than testing and to find them more efficiently
Collaborative development practices tend to find different kinds of errors than testing does, implying that you need to use both reviews and testing to ensure the quality of your software
Pair programming typically costs about the same as inspections and produces similar-quality code
Chapter 22 - Developer Testing:
Automated testing is useful in general and is essential for regression testing
The best way to improve your testing process is to make it regular, measure it, and use what you learn to improve it
Writing test cases before the code takes the same amount of time and effort as writing the test cases after the code, but it shortens defect-detection-debug-correction-cycles (Test Driven Development)
As far as how you are formulating your unit tests, you should consider basis testing, data-flow analysis, boundary analysis etc. All of these are explained in great detail in the book (which also includes many other references for further reading).
Maybe this isn't exactly what you were asking, but I would say automated testing is definitely not enough of a strategy. You should also consider such things as pair programming, formal reviews (or informal reviews, depending on the size of the project) and test scaffolding along with your automated testing (unit tests, regression testing etc.).
The idea of multiple testing cycles is to catch problems as early as possible when things change.
Unit tests should be done by the developers to ensure the units work in isolation.
Acceptance tests should be done by the client to ensure the system meets the requirements.
However, something has changed between those two points that should also be tested. That's the integration of units into a product before being given to the client.
That's something that should first be tested by the product creator, not the client. The minute you involve the client, things slow down, so the more fixes you can do before they get their grubby little hands on it, the better.
In a big shop (like ours), there are unit tests, integration tests, globalization tests, master-build tests and so on at each point where the deliverable product changes. Only once all high severity bugs are fixed (and a plan for fixing low priority bugs is in place) do we unleash the product to our beta clients.
We do not want to give them a dodgy product simply because fixing a bug at that stage is a lot more expensive (especially in terms of administrivia) than anything we do in-house.
It's really impossible to know whether or not you have enough tests based simply on whether you have a test for every method and feature. Typically I will combine testing with coverage analysis to ensure that all of my code paths are exercised in my unit tests. Even this is not really enough, but it can be a guide to where you may have introduced code that isn't exercised by your tests. This should be an indication that more tests need to be written or, if you're doing TDD, you need to slow down and be more disciplined. :-)
Tests should cover both good and bad paths, especially in unit tests. Your acceptance tests may be more or less concerned with the bad path behavior but should at least address common errors that may be made. Depending on how complete your stories are, the acceptance tests may or may not be adequate. Often there is a many-to-one relationship between acceptance tests and stories. If you only have one automated acceptance test for every story, you probably don't have enough unless you have different stories for alternate paths.
Multiple layers of testing can be very useful. Unit tests to make sure the pieces behave; integration to show that clusters of cooperating units cooperate as expected, and "acceptance" tests to show that the program functions as expected. Each can catch problems during development. Overlap per se isn't a bad thing, though too much of it becomes waste.
That said, the sad truth is that you can never ensure that the product behaves "as expected", because expectation is a fickle, human thing that gets translated very poorly onto paper. Good test coverage won't prevent a customer from saying "that's not quite what I had in mind...". Frequent feedback loops help there. Consider frequent demos as a "sanity test" to add to your manual mix.
Probably not, unless your software is really, really simple and has only one component.
Unit tests are very specific, and you should cover everything thoroughly with them. Go for high code-coverage here. However, they only cover one piece of functionality at a time and not how things work together. Acceptance tests should cover only what the customer really cares about at a high level, and while it will catch some bugs in how things work together, it won't catch everything as the person writing such tests will not know about the system in depth.
Most importantly, these tests may not be written by a tester. Unit tests should be written by developers and run frequently (up to every couple minutes, depending on coding style) by the devs (and by the build system too, ideally). Acceptance tests are often written by the customer or someone on behalf of the customer, thinking about what matters to the customer. However, you also need tests written by a tester, thinking like a tester (and not like a dev or customer).
You should also consider the following sorts of tests, which are generally written by testers:
Functional tests, which will cover pieces of functionality. This may include API testing and component-level testing. You will generally want good code-coverage here as well.
Integration tests, which put two or more components together to make sure that they work together. You don't want one component to put out the position in the array where the object is (0-based) when the other component expects the count of the object ("nth object", which is 1-based), for example; a small sketch of this kind of mismatch follows this list. Here, the focus is not on code coverage but on coverage of the interfaces (general interfaces, not code interfaces) between components.
System-level testing, where you put everything together and make sure it works end-to-end.
Testing for non-functional features, like performance, reliability, scalability, security, and user-friendliness (there are others; not all will relate to every project).
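The index-versus-count mismatch mentioned in the integration-test bullet above is easy to sketch with hypothetical components: each side is internally consistent and would pass its own unit tests, and only a test that wires them together exposes the off-by-one disagreement.

    # Hypothetical sketch: two components that disagree about an interface.

    def position_of(items, target):
        """Component A: reports where an object is, as a 0-based index."""
        return items.index(target)

    def nth_item(items, n):
        """Component B: expects a 1-based 'nth object' count."""
        return items[n - 1]

    def test_wiring_the_components_together_reveals_the_off_by_one():
        items = ["alpha", "beta", "gamma"]
        pos = position_of(items, "beta")   # A answers 1 (0-based)
        # B treats 1 as "the first item", so the pair returns the wrong object.
        # A real integration test would assert the business expectation
        # (nth_item(...) == "beta") and fail, catching the mismatch that
        # each component's isolated unit tests happily miss.
        assert nth_item(items, pos) == "alpha"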
Integration tests are for when your code integrates with other systems, such as 3rd-party applications, or other in-house systems such as the environment, database, etc. Use integration tests to ensure that the behavior of the code is still as expected.
In short no.
To begin with, your story cards should have acceptance criteria. That is, acceptance criteria specified by the product owner, in conjunction with the analyst, describing the required behavior; if met, the story card will be accepted.
The acceptance criteria should drive the automated unit tests (done via TDD) and the automated regression/functional tests, which should be run daily. Remember, we want to move defects to the left; that is, the sooner we find them, the cheaper and faster they are to fix. Furthermore, continuous testing enables us to refactor with confidence. This is required to maintain a sustainable pace for development.
In addition, you need automated performance tests. Running a profiler daily or overnight would provide insight into the consumption of CPU and memory and whether any memory leaks exist. Furthermore, a tool like LoadRunner will enable you to place a load on the system that reflects actual usage. You will be able to measure response times and CPU and memory consumption on a production-like machine running LoadRunner.
The automated performance tests should reflect actual usage of the app. You measure the number of business transactions (i.e., for a web application, the click on a page and the response to the user, or round trips to the server) and determine the mix of such transactions along with the rate at which they arrive per second. Such information will enable you to properly design the automated LoadRunner test required to performance test the application. As is often the case, some performance issues will trace back to the implementation of the application, while others will be determined by the configuration of the server environment.
Remember, your application will be performance tested. The question is whether the first performance test happens before or after you release the software. Believe me, the worst place to have a performance problem is in production. Performance issues can be the hardest to fix and can cause a deployment to all users to fail, thus cancelling the project.
Finally, there is User Acceptance Testing (UAT). These are tests designed by the product owner/business partner to test the overall system prior to release. Generally, because of all the other testing, it is not uncommon for the application to return zero defects during UAT.
It depends on how complex your system is. If your acceptance tests (which satisfy the customer's requirements) exercise your system from front to back, then no, you don't.
However, if your product relies on other tiers (like backend middleware/database) then you do need a test that proves that your product can happily link up end-to-end.
As other people have commented, tests don't necessarily prove the project functions as expected, just how you expect it to work.
Frequent feedback loops to the customer and/or tests that are written/parsable in a way the customer understands (say, for example, in a BDD style) can really help.
If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?
This is enough to show your software is functionally correct, at least to the extent that your test coverage is sufficient. Now, depending on what you're developing, there certainly are non-functional requirements that matter; think about reliability, performance and scalability.
Technically, a full suite of acceptance tests should cover everything. That being said, they're not "enough" for most definitions of enough. By having unit tests and integration tests, you can catch bugs/issues earlier and in a more localized manner, making them much easier to analyze and fix.
Consider that a full suite of manually executed tests, with the directions written on paper, would be enough to validate that everything works as expected. However, if you can automate the tests, you'd be much better off because it makes doing the testing that much easier. The paper version is "complete", but not "enough". In the same way, each layer of tests adds more to the value of "enough".
It's also worth noting that the different sets of tests tend to test the product/code from a different "viewpoint". Much the same way QA may pick up bugs that dev never thought to test for, one set of tests may find things the other set wouldn't.
Acceptance testing can even be done manually by the client if the system at hand is small.
Unit and small integration tests (consisting of unit-like tests) are there for you to build a sustainable system.
Don't try to write tests for every part of the system. That is brittle (easy to break) and overwhelming.
Decide on the critical parts of the system that take too much time to test manually, and write acceptance tests only for those parts, to make things easy for everyone.

What is the right balance between unit vs. functional testing in a typical web application?

Unit tests are cheaper to write and maintain, but they don't cover all scenarios. What is the right balance between them?
It is important to distinguish between the intent and scope of these two types of tests:
A unit test typically tests a specific feature at the module/class level, e.g. create-X, update-Y, foo-the-bar, compact-the-whizbang, etc. One class may have multiple unit tests.
A functional test, also called an 'acceptance test', typically tests a use-case scenario from the outermost interface through to the end of processing, e.g. from the user interface to the database and back again, from the input process to the notification utility, etc.
These two types of tests are not interchangeable, and are in general disjoint. So the notion of striking a 'balance' between them makes no sense. You either need them or you don't.
If you are referring to the ease of coding each type of test in your testing framework, that is a different question - but the use of the framework (say, NUnit vs. a user-bot) does not change the type/nature of the test.
The best "balance", in general, would be to unit-test for confidence and completeness, and functional-test for client acceptance.
I agree with Steven Lowe that there is no trade-off between unit testing and functional testing, as they are used for very different purposes.
Unit tests are about method and type verification, and also regression testing. Functional tests are about functional, scenario, and feasibility testing. In my opinion, there is almost no overlap.
If it helps, here are my testing categories.
Developers start from the inside and work outwards, focusing on code:
Assertions - verify data flow and structures
Debugger - verify code flow and data
Unit testing - verify each function
Integration testing - verify sub-systems
System testing - verify functionality
Regression tests - verify defects stay fixed
Security tests - verify system can't be penetrated easily.
Testers start from the outside and work inwards, focusing on features:
Acceptance tests - verify end-user requirements
Scenario tests - verify real-world situations
Global tests - verify feasible inputs
Regression tests - verify defects stay fixed
Usability tests - verify that system is easy to use
Security tests - verify system can't be penetrated easily
Code coverage - testing untouched code
Compatibility - with previous releases
Looking for quirks and rough edges.
End-users work from the outside, and usually have little focus:
Acceptance tests - verify end-user requirements
Scenario tests - verify real-world situations
Usability tests - verify that system is easy to use
Looking for quirks and rough edges.
I like Brian Marick's quadrant on automated tests where the distinctions are business vs. technology facing and support programming vs. critique product.
With that framework the question of balance becomes, what do I need right now?
On the app I am working on at the moment, there is probably a 10:1 ratio of unit to functional tests. The unit tests cover simple things like retrieving entities from the DB, error handling for DB/network connectivity, etc. These run quickly - minutes or less - and are run by devs daily.
The functional tests, while fewer, tend to take a kitchen-sink approach - can the user complete an order, etc. They tend to cover the business-domain end of things and are run by business analysts and operations - sadly for us, often by hand. These take weeks to run, usually to finalize a release cycle.
My current project is at about 60% unit test coverage, and all user stories have happy-path coverage in Selenium tests; some have additional coverage.
We're continually discussing this: is there really any point in pushing unit-test coverage of increasingly absurd scenarios just to drive the number higher?
The argument is that expanding the Selenium tests increases test coverage on things that have business value. Does the customer really care about unit test corner cases that may fail?
When you get good at Selenium testing, the cost of writing new tests with business value decreases. For us they're just as simple as unit tests.
The run-time cost is another issue. We have a small cluster of boxes running these tests all of the time.
So we tend to favour web tests more, probably because we've gotten good at writing them and they provide undeniable business value.
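For reference, a web test of that kind can stay nearly as short as a unit test once the plumbing is in place; a minimal sketch using Selenium's Python bindings (the URL and element IDs are made up, and a local browser driver is assumed):

    # Hypothetical Selenium sketch: a happy-path "can the user log in" web test.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_user_can_log_in():
        driver = webdriver.Chrome()            # assumes a local chromedriver setup
        try:
            driver.get("https://staging.example.com/login")   # made-up URL
            driver.find_element(By.ID, "username").send_keys("demo")
            driver.find_element(By.ID, "password").send_keys("secret")
            driver.find_element(By.ID, "submit").click()
            # Assert on something the business cares about, not an implementation detail.
            assert "Dashboard" in driver.title
        finally:
            driver.quit()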
Originally I leaned heavily towards preferring unit tests over functional/acceptance tests due to the initial cost factor of acceptance tests. However, over time I have changed my philosophy and am now a strong proponent of choosing acceptance tests wherever possible and only using unit tests when acceptance tests can't meet my needs.
The basic rationale behind choosing acceptance over unit tests is the same as the basic rationale for SOLID code. Your implementation should be able to change drastically with refactoring etc., but all business cases - acceptance tests - should be able to remain unchanged and prove acceptable system behavior (tests pass). With unit tests there's often a natural strong coupling of test to implementation code. Even though it's test code, it's still code, and strong coupling, as we know, should be avoided. By choosing acceptance tests you're often led down the spiral of success to create well-planned, consumable, decoupled APIs, letting your implementation change behind the scenes without forcing your tests to change. Also, your developer implementation thoughts are in line with the business system-behavior thoughts. In the end I find that all of this is better for business and for coder satisfaction.
From a theory standpoint, I often ask myself: if a piece of code can't be tested via an acceptance test, why should that piece of code exist? I.e., if it's not part of a valid business scenario, does that code add value, or is it, and will it remain, purely a cost?
Additionally, if you comment/document your acceptance tests well, those comments/documents generally are the most current and accurate language of the system - which usually will let you avoid other less valuable documentation approaches. Unit tests don't lend themselves to that form of "business-term" communication.
Lastly, I haven't formed this view point from just my personal development, it's proven successful with a couple different project teams in my "very corporate" work environment.
JB
http://jb-brown.blogspot.com
Tests should run quickly and help localise problems.
Unit tests allow you to do this by only testing the module in question.
However functional/integration/acceptance tests can be made to run sufficiently quickly in most web scenarios.
I once read that a unit test is an "executable requirement", which makes perfect sense to me. If your test is not focused on proving a business requirement, then it really is of no use. If you have detailed requirements (and that is the acid test), then you will write a number of unit tests to exercise each possible scenario, which will in turn ensure the integrity of your data structures, algorithms and logic. If you are testing something that is not a requirement but you know must be true for the test to pass, then it is more than likely that you are missing requirements.