Do unit tests still matter if functional tests are satisfied? - unit-testing

For a short summary:
Unit tests are the smaller ones that check that something works correctly from the developer's point of view.
Functional tests are the ones that check that things are right from the user's point of view.
If functional tests are already satisfied, do we still need to write unit tests?
Most likely (using web development as the context), this commonly means using the browser to see if things are right, or letting other users/people try the system/application.
Let's set aside other tests, such as those for edge cases.

Are there any metrics that you're using to determine whether a functional test is "satisfied"?
I feel like it helps to have an objective measurement to create a baseline for comparing types of tests; a common one is code coverage.
By measuring code coverage, it's easy to compare functional tests and unit tests: does the functional test cover the same lines of code as the unit test? If so, then the unit test is redundant.
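For a concrete baseline, here is a minimal sketch of that comparison, assuming pytest with coverage.py and a tests/unit vs. tests/functional split (the paths are assumptions about the project layout):

```python
import coverage
import pytest

# Measure each suite's line coverage separately, then diff the reports;
# lines hit by both suites are the candidates for redundancy.
for suite in ("tests/unit", "tests/functional"):
    name = suite.rsplit("/", 1)[-1]
    cov = coverage.Coverage(data_file=f".coverage.{name}")
    cov.start()
    pytest.main([suite])      # run one suite under measurement
    cov.stop()
    cov.save()
    print(f"--- {name} suite ---")
    cov.report()              # per-file line coverage for this suite
```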
The problem is that a coverage comparison ignores a ton of other issues:
Are the functional tests prohibitively long to run? Are they difficult to set up, or are they only executed in CI? Then it may make sense to duplicate lines of coverage in unit tests, which will give developers fast feedback.
Is this a POC or an immature project? Functional tests may provide the best bang for the buck, since they should be able to assert higher-level use cases while staying abstracted from implementation details. This is extremely helpful when implementation details are uncertain.
Code coverage can be misleading. An IO-centric library (e.g. a DB driver) could easily reach 100% code coverage by mocking its dependencies. If we use code coverage to compare functional tests and unit tests in this case, we'd be missing multiple dimensions of testing, as functional tests will exercise the IO dependencies. (IMO, unit testing in this case actually provides fairly small value and gives false confidence in IO-heavy code, resulting in integration bugs that are usually uncovered in later dev cycles, where issues are more expensive to fix.)
You touched on edge cases: functional tests will usually illustrate a couple of client flows. Exercising all the edge cases and error handling through functional tests tends to be a waste of resources and ends up creating huge suites of slow, difficult-to-maintain tests.
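To make that last point concrete: edge cases that would each require a slow browser flow can often be folded into one fast, parametrized unit test. A minimal sketch (the parse_quantity helper is hypothetical, pytest assumed):

```python
import pytest

# Hypothetical input-validation unit; stands in for any edge-case-heavy code.
def parse_quantity(raw: str) -> int:
    value = int(raw)  # raises ValueError for non-numeric input
    if value < 1 or value > 999:
        raise ValueError("quantity out of range")
    return value

# One parametrized test covers many edge cases in milliseconds; the same
# cases as browser-driven functional tests would bloat the suite.
@pytest.mark.parametrize("raw", ["0", "1000", "-5", "abc", ""])
def test_rejects_bad_quantities(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)

@pytest.mark.parametrize("raw,expected", [("1", 1), ("999", 999), ("42", 42)])
def test_accepts_valid_quantities(raw, expected):
    assert parse_quantity(raw) == expected
```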

Related

unit testing vs integration testing & its maintenance

I am working on a web application which is constantly being enhanced in a parallel development environment (two requirements are developed in two different environments, and the first code base is merged into the second when the first requirement is released to production).
My question is about having both integration testing and unit testing for the app, and their maintenance.
Unit testing with mocking makes it difficult to maintain tests in parallel development; integration testing (using Selenium) in parallel development makes it difficult to maintain the required data in the database (which may still be easier than fixing a failed unit test).
I am leaning towards integration testing, as merging code will not break a use case, but a unit test may fail after merging code because of its expectations.
The app is a little old and was not properly designed for unit testing, so refactoring the code and maintaining unit test cases is becoming hard. Please suggest a better approach for testing.
Unit tests and integration tests both have their place.
Unit tests, as the name indicates, verify that the unit is behaving as expected.
It's true that integration tests cover the same code that is covered by the unit tests. But unit tests help you pinpoint issues more easily. Instead of investigating the failure to understand which part of the system is responsible for the issue, you have a failing unit test to help you find out.
Another reason to have unit tests is speed. Unit tests should be fast. They should not rely on various system dependencies (you should be using mocks and stubs for that). If you have unit tests with good coverage, you get fast feedback about the quality of the system - before you start your long test cycle.
Actually you usually employ various level of automated tests:
Smoke tests. These are unit tests that exercise various parts of the system in the most basic scenarios. They are usually employed as part of a gated check-in that keeps bad code from being checked in. You need these to be fast.
Regression - unit tests. Usually part of continuous integration. Again, you need these to be as fast as possible so that the build does not take too long.
Full regression + integration tests. These are more like system tests and take longer to run. They usually run once a day in a nightly build, or even less frequently (depending on length).
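As a rough sketch of how that split can look with a single test runner (pytest markers, assumed to be registered in the project's pytest.ini; the tests themselves are made up):

```python
import pytest

# Partitioning one suite into the levels above:
#   gated check-in:  pytest -m smoke          (seconds)
#   CI regression:   pytest -m "not system"   (fast tests only)
#   nightly build:   pytest                   (everything, incl. system tests)

@pytest.mark.smoke
def test_cart_total_basic():
    # Most basic scenario; must stay fast enough to gate check-ins.
    assert 2 * 100 + 50 == 250

def test_cart_total_with_discount():
    # Ordinary unit-level regression test; runs on every CI build.
    assert round(250 * 0.9) == 225

@pytest.mark.system
def test_full_checkout_flow():
    # Placeholder for a longer system/integration scenario; nightly only.
    pytest.skip("stand-in for a real end-to-end scenario")
```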
If you find the cost of maintaining some types of tests too high, it's sensible to consider dropping them. I think integration tests are more important for QA, and if I could only choose one type of testing (unit vs integration) that's what I would go with.
However, first you might also want to consider whether there is a way to factor your unit tests to avoid these maintenance issues. When I first started using mocks, it took me a while to find the right balance. In my experience, it is best to avoid mocks (with expectations) as much as possible. I do use mocking libraries, though mostly for trivial stubbing rather than more complex mocking. I wrote a blog post about this a while back if you are interested.
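To illustrate the distinction (a hedged sketch using Python's unittest.mock; the rate-feed dependency and the functions are made up):

```python
from unittest import mock

# Code under test: depends on an injected rate provider.
def total_in_usd(items, feed):
    return sum(feed.rate(item["currency"]) * item["amount"] for item in items)

def test_total_with_trivial_stub():
    # Stub: canned data only, no assumptions about *how* it is called.
    feed = mock.Mock()
    feed.rate.return_value = 2.0
    assert total_in_usd([{"currency": "EUR", "amount": 10}], feed) == 20.0

def test_total_with_expectation_style_mock():
    # Mock with expectations: also pins down the exact interaction.
    # This breaks as soon as the implementation calls the dependency
    # differently - the coupling the answer warns about.
    feed = mock.Mock()
    feed.rate.return_value = 2.0
    total_in_usd([{"currency": "EUR", "amount": 10}], feed)
    feed.rate.assert_called_once_with("EUR")
```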
With regard to the unit test issue:
I would suspect that the unit tests you are constantly having to refactor are at a bit too low a level. Generally it is better to have somewhat higher-level tests that exercise the lower-level code by varying the inputs.
Assuming there is no unnecessary code, the higher-level tests should still provide good code coverage (if they can't reach some code, why is that code there?).
With regard to the functional test issue:
You may want to consider a representative data sample (or several). That way you have a known input, so you should get predictable output.
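For instance, a minimal sketch of a representative sample as a pytest fixture (the order records are made-up stand-ins for a seeded test database):

```python
import pytest

# Known, representative input: the output of the code under test
# becomes predictable.
REPRESENTATIVE_ORDERS = [
    {"id": 1, "status": "shipped", "total": 250},
    {"id": 2, "status": "pending", "total": 99},
]

@pytest.fixture
def seeded_orders():
    # A real suite might load a fixture file or seed a test DB here;
    # returning copies keeps tests from mutating the shared sample.
    return [dict(order) for order in REPRESENTATIVE_ORDERS]

def test_pending_report(seeded_orders):
    pending = [o for o in seeded_orders if o["status"] == "pending"]
    assert [o["id"] for o in pending] == [2]  # predictable, known output
```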
Avoid unit testing altogether. Or start with unit testing and, once you reach integration tests, "pass the relay baton" to integration testing. Delete the unit tests previously written, or leave some behind as documentation.
Code coverage cannot be measured with integration testing (as it passes through different systems).
Even though maintaining both unit tests and integration tests is theoretically ideal, the experienced say: let us not risk it.
Only an advanced unit tester can become an integration tester, so only those who have become integration testers can advise against unit testing. We had better listen to them, because they are passionate about unit testing and are willing to sacrifice the "fun" of writing unit tests for a better "safety net" for the team.
Integration tests are difficult to implement (seeding the DB, recreating infrastructure, etc.), and we simply won't have time to maintain both (unit and integration tests).
Unit testing can still be good for library (.dll) and framework development, complex calculations, and product development companies. But how many of us work on those? These days, for web development, everyone can easily work on end-to-end scenarios because the frameworks are already available. For that, integration tests are best anyway.
Some helpful links:
"Why You Need to Stop Writing Unit Tests" https://hackernoon.com/why-you-need-to-stop-writing-unit-tests
"Write tests. Not too many. Mostly integration."
https://kentcdodds.com/blog/write-tests
"The No. 1 unit testing best practice: Stop doing it"
https://techbeacon.com/app-dev-testing/no-1-unit-testing-best-practice-stop-doing-it
"Unit test kills"
https://www.linkedin.com/pulse/before-you-feed-unit-test-beast-s-a-n-j-ay-mohan/?trackingId=5eIFIJGSBpnGuXMEz2PnwQ%3D%3D

Why is using integration tests instead of unit tests a bad idea?

Let me start with definitions:
Unit testing is a software verification and validation method in which a programmer tests whether individual units of source code are fit for use.
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach.
Integration tests are slow and cannot be executed very often.
In most cases, integration tests do not indicate the source of the problem.
It's more difficult to create a test environment with integration tests.
It's more difficult to ensure high coverage (e.g., simulating special cases, unexpected failures, etc.).
Integration tests cannot be used with interaction-based testing.
Integration tests move the moment of discovering a defect further away (from paxdiablo).
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments to present to development teams which write ONLY integration tests and consider them to be unit tests.
Any test which involves components from different layers is considered an integration test, in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't the problem; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved, and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need to do more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I write unit tests, and about ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
On the other hand, from a "pure" TDD perspective, unit tests aren't tests; they're specifications of functionality. Integration tests, by contrast, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context-specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that is utilized by many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it.
I understand that integration testing is quick and easy. But I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component, such as a method or at most a class, is supposed to test that component in every legal scenario (of course, one abstracts to equivalence classes, but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you listed. Without unit tests, it is more likely that a change to a unit that essentially operates it in a new scenario would not be caught, and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock it, to ensure you test just your single class. There should be no external resources in a unit test.
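A minimal sketch of that isolation using Python's unittest.mock (the Notifier class and its gateway are hypothetical examples, not from the question):

```python
from unittest import mock

# Class under test: depends on an external gateway (e.g. SMTP), injected.
class Notifier:
    def __init__(self, gateway):
        self.gateway = gateway

    def notify(self, user, message):
        if not user.get("opted_in"):
            return False
        self.gateway.send(user["email"], message)
        return True

def test_notify_respects_opt_out():
    gateway = mock.Mock()          # replaces the real external resource
    notifier = Notifier(gateway)
    user = {"opted_in": False, "email": "a@example.com"}
    assert notifier.notify(user, "hi") is False
    gateway.send.assert_not_called()  # nothing external was touched
```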
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible, and extendable.
When you check in, your build server should only run unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight, or manually as needed.
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable, and neither one can replace the other in the job it does. I do see a lot of integration tests masquerading as unit tests, though - having dependencies and taking a long time to run. The two should function separately and as part of a continuous integration system.
Integration tests do often find things that unit tests do not...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they test different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, approaching it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These cut down on the need for manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test tells you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your function are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when deployed. A failing unit test gives you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end, I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit testing: Test small things in isolation with low-level detail, including but not limited to method conditions, checks, loops, defaulting, calculations, etc.
Integration testing: Test a wider scope that involves a number of components which can impact each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests is to prove that systems/components work fine when integrated together.
(I think) what the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes, testing only their own feature code because they think someone else tested the rest, so it should be fine. This helps code coverage but can harm the application in general.
In theory, the tight isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking the API and persistence layers would be one example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking, and if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If something is not easy to test, adapt your code to make it easy to test. (TDD)
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code quality checks (e.g. coverage, etc.)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you know where to look first when those fail.
Pros:
Tests close to a real-world e2e scenario
Finds issues that developers did not think about
Very helpful in microservices architectures
Cons:
Mostly slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best of both, and not just have them to show good statistics to management.

What are key points to explain Unit Testing

I want to introduce Unit Testing to some colleagues that have no or little experience with Unit Testing. I'll start with a presentation of about an hour to explain the concept and give lots of examples. I'll follow up with pair programming sessions and code reviews.
What are the key points that should be focused on in the introduction?
To keep it really short: Unit testing is about two things
a tool for verifying intentions
a necessary safety net for refactoring
Obviously, it is a lot more than that, but to me that pretty much sums it up.
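For the presentation, one tiny example per point helps; a minimal sketch of "verifying intentions" (the add_vat helper and its 20% default rate are made up):

```python
# The tests state, in executable form, what add_vat is intended to do -
# and keep stating it during every future refactoring.
def add_vat(net: float, rate: float = 0.20) -> float:
    return round(net * (1 + rate), 2)

def test_add_vat_applies_default_rate():
    assert add_vat(100.0) == 120.0

def test_add_vat_rounds_to_cents():
    assert add_vat(0.10) == 0.12
```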
Unit tests test small things
Another thing to remember is that unit tests test small things, "units". So if your test runs against a resource like a live server or a database, most people call that a system or integration test. To unit test just the code that talks to a resource like that, people often use mock objects (often called mocks).
Unit tests should run fast and be run often
When unit tests test small things, the tests run fast. That's a good thing. Frequently running unit tests helps you catch problems soon after they occur. The ultimate in frequently running unit tests is having them automated as part of continuous integration.
Unit tests work best when coverage is high
People have different views as to whether 100% unit test coverage is desirable. I'm of the belief that high coverage is good, but that there's a point of diminishing returns. As a very rough rule of thumb, I would be happy with a code base that had 85% coverage with good unit tests.
Unit tests aren't a substitute for other types of tests
As important as unit tests are, other types of testing, like integration tests, acceptance tests, and others can also be considered parts of a well-tested system.
Unit testing existing code poses special challenges
If you're looking to add unit tests to existing code, you may want to look at Working Effectively with Legacy Code by Michael Feathers. Code that wasn't designed with testing in mind may have characteristics that make testing difficult and Feathers writes about ways of carefully refactoring code to make it easier to test. And when you're familiar with certain patterns that make testing code difficult, you and your team can write code that tries to avoid/minimize those patterns.
You might get some inspiration here too https://stackoverflow.com/questions/581589/introducing-unit-testing-to-a-wary-team/581610#581610
Remember to point out that Unit Testing is not a silver bullet and shouldn't replace other forms of traditional testing (Functional Tests etc) but should be used in conjunction.
Unit testing works better in some areas than others, so the only way to have truly comprehensive testing is to combine it with other forms.
This seems to be one of the biggest criticisms I see of Unit Testing as a lot of people don't seem to 'get' that it shouldn't be replacing other forms of testing in totality.
Main points:
unit tests help both design (by expressing intent) and regression test (by never going away) code;
unit tests are for lazy programmers who don't want to debug their code again;
tests have no business influencing or affecting business logic and functionality, but they do test it fully;
unit tests demand the same qualities as regular code: theory and strategy, organization, patterns, smells, refactoring;
Unit tests should be FAIR.
F - Fast
A - can be easily Automated
I - can be run Independently
R - Repeatable

What is the right balance between unit vs. functional testing in a typical web application?

Unit tests are cheaper to write and maintain, but they don't cover all scenarios. What is the right balance between them?
It is important to distinguish between the intent and scope of these two types of tests:
a unit test typically tests a specific feature at the module/class level, e.g. create-X, update-Y, foo-the-bar, compact-the-whizbang, etc. One class may have multiple unit tests.
a functional test, also called an "acceptance test", typically tests a use-case scenario from the outermost interface through to the end of processing, e.g. from the user interface to the database and back again, from the input process to the notification utility, etc.
these two types of tests are not interchangeable and are in general disjoint, so the notion of striking a "balance" between them makes no sense. You either need them or you don't.
if you are referring to the ease of coding each type of test in your testing framework, that is a different question - but the use of the framework (say, NUnit vs. a user-bot) does not change the type/nature of the test.
the best "balance", in general, would be to unit-test for confidence and completeness, and functional-test for client acceptance
I agree with Steven Lowe that there is no trade-off between unit testing and functional testing, as they are used for very different purposes.
Unit tests are about method and type verification, and also regression testing. Functional tests are about functional, scenario, and feasibility testing. In my opinion, there is almost no overlap.
If it helps, here are my testing categories.
Developers start from the inside and work outwards, focusing on code:
Assertions - verify data flow and structures
Debugger - verify code flow and data
Unit testing - verify each function
Integration testing - verify sub-systems
System testing - verify functionality
Regression tests - verify defects stay fixed
Security tests - verify system can't be penetrated easily.
Testers start from the outside and work inwards, focusing on features:
Acceptance tests - verify end-user requirements
Scenario tests - verify real-world situations
Global tests - verify feasible inputs
Regression tests - verify defects stay fixed
Usability tests - verify that system is easy to use
Security tests - verify system can't be penetrated easily
Code coverage - testing untouched code
Compatibility - with previous releases
Looking for quirks and rough edges.
End-users work from the outside, and usually have little focus:
Acceptance tests - verify end-user requirements
Scenario tests - verify real-world situations
Usability tests - verify that system is easy to use
Looking for quirks and rough edges.
I like Brian Marick's quadrant for automated tests, where the distinctions are business-facing vs. technology-facing and supporting programming vs. critiquing the product.
With that framework the question of balance becomes, what do I need right now?
On the app I am working on at the moment, there are probably 10:1 unit to functional tests. The unit tests cover simple things like retrieving entities from the DB, error-handling tests for DB/network connectivity, etc. These run quickly - minutes or less - and are run by devs daily.
The functional tests, while fewer, tend to take a kitchen-sink approach - can the user complete an order, etc. They cover the business-domain end of things and are run by business analysts and operations - sadly for us, often by hand. They take weeks to run and are usually used to finalize a release cycle.
My current project is at about 60% unit test coverage, and all user stories have happy-path coverage in Selenium tests; some have additional coverage.
We're continually discussing this: is there really any point in pushing unit-test coverage of increasingly absurd scenarios just for higher coverage numbers?
The argument is that expanding the Selenium tests increases test coverage on things that have business value. Does the customer really care about unit test corner cases that may fail?
When you get good at Selenium testing, the cost of writing new tests with business value decreases. For us they're just as simple as unit tests.
The run-time cost is another issue. We have a small cluster of boxes running these tests all of the time.
So we tend to favour web tests more, probably because we've gotten good at writing them and they provide undeniable business value.
Originally I leaned heavily towards preferring unit tests over functional/acceptance tests due to the initial cost factor of acceptance tests. However, over time I have changed my philosophy and am now a strong proponent of choosing acceptance tests wherever possible and only using unit tests when acceptance tests can't meet my needs.
The basic rationale behind choosing acceptance over unit tests is the same as the basic rationale for SOLID code. Your implementation should be able to change drastically through refactoring etc., but all business cases - acceptance tests - should be able to remain unchanged and prove acceptable system behavior (tests pass). With unit tests there's often a natural strong coupling of test to implementation code. Even though it's test code, it's still code, and strong coupling, as we know, should be avoided. By choosing acceptance tests you're often led down the spiral of success to create well-planned, consumable, decoupled APIs, letting your implementation change behind the scenes without forcing your tests to change. Also, your developer implementation thoughts stay in line with the business system-behavior thoughts. In the end I find that all of this is better for the business and for coder satisfaction.
From a theory standpoint, I often ask myself: if a piece of code can't be tested via an acceptance test, why should that piece of code exist? I.e., if it's not part of a valid business scenario, does that code add value, or is it, and will it remain, purely a cost?
Additionally, if you comment/document your acceptance tests well, those comments/documents are generally the most current and accurate description of the system, which usually lets you avoid other, less valuable documentation approaches. Unit tests don't lend themselves to that form of "business-term" communication.
Lastly, I haven't formed this viewpoint from just my personal development; it's proven successful with a couple of different project teams in my "very corporate" work environment.
JB
http://jb-brown.blogspot.com
Tests should run quickly and help localise problems.
Unit tests allow you to do this by only testing the module in question.
However, functional/integration/acceptance tests can be made to run sufficiently quickly in most web scenarios.
I once read that a unit test is an "executable requirement", which makes perfect sense to me. If your test is not focused on proving a business requirement, then it really is of no use. If you have detailed requirements (and that is the acid test), then you will write a number of unit tests to exercise each possible scenario, which will in turn ensure the integrity of your data structures, algorithms, and logic. If you are testing something that is not a requirement, but you know it must be true for the test to pass, then it is more than likely that you are missing requirements.

Why are functional tests not enough? What do unit tests offer?

I just had a conversation with my lead developer, who disagreed that unit tests are all that necessary or important. In his view, functional tests with high enough code coverage should be enough, since any inner refactorings (interface changes, etc.) will not lead to the tests needing to be rewritten or reviewed.
I tried explaining but didn't get very far, and thought you guys could do better. ;-) So...
What are some good reasons to unit test code that functional tests don't offer? What dangers are there if all you have are functional tests?
Edit #1 Thanks for all the great answers. I wanted to add that by functional tests I don't mean only tests on the entire product, but rather also tests on modules within the product, just not on the low level of a unit test with mocking if necessary, etc. Note also that our functional tests are automatic, and are continuously running, but they just take longer than unit tests (which is one of the big advantages of unit tests).
I like the brick vs. house example. I guess what my lead developer is saying is testing the walls of the house is enough, you don't need to test the individual bricks... :-)
Off the top of my head
Unit tests are repeatable without effort. Write once, run thousands of times, no human effort required, and with much faster feedback than you get from a functional test.
Unit tests test small units, so they immediately point to the "sector" in which the error occurs. Functional tests point out errors too, but those can be caused by plenty of modules, even in co-operation.
I'd hardly call an interface change "an inner refactoring". Interface changes tend to break a lot of code, and (in my opinion) force a new test loop rather than none.
unit tests are for devs to see where the code failed
functional tests are for the business to see if the code does what they asked for
unit tests are checking that you've manufactured your bricks correctly
functional tests are checking that the house meets the customer's needs.
They're different things, but the latter will be much easier, if the former has been carried out.
It can be a lot more difficult to find the source of problems if a functional test fails, because you're effectively testing the entire codebase every time. By contrast, unit tests compartmentalize the potential problem areas. If all the other unit tests succeed but this one, you have an assurance that the problem is in the code you're testing and not elsewhere.
Bugs should be caught as soon as possible in the development cycle - having bugs move from design to code, or code to test, or (hopefully not) test to production increases the cost and time required to fix it.
Our shop enforces unit testing for that reason alone (I'm sure there are other reasons but that's enough for us).
If you use a pure Extreme Programming / Agile development methodology, unit tests are always required, as they are the requirements for development.
In pure XP/Agile, one derives all requirements from the tests which are going to be performed on the application:
Functional tests - generate functional requirements.
Unit tests - generate function or object requirements.
Other than that, unit testing can be used to keep a persistent record of functional requirements.
I.e., if you need to change the way a function works while its inputs and outputs stay untouched, then unit testing is the best way to keep track of possible problems, as you only need to run the tests.
In TDD/BDD, unit tests are necessary to write the program. The process goes
failing test -> code -> passing test -> refactor -> repeat
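A minimal sketch of one trip around that loop (the slugify function is a made-up example):

```python
# 1. Red: write the failing test first; it specifies the behaviour.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# 2. Green: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# 3. Refactor with the passing test as a safety net, then repeat
#    with the next failing test (e.g. punctuation handling).
```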
The article linked also mentions the benefits of TDD/BDD. In summary:
Comes very close to eliminating the use of a debugger (I only use it in tests now and very rarely for those)
Code can't stay messy for longer than a few minutes
Documentation examples for an API built-in
Forces loose coupling
The link also has a (silly) walk-through example of TDD/BDD, but it's in PowerPoint (ew), so here's an html version.
Assume for a second that you already have a thorough set of functional tests that check every possible use case, and you are considering adding unit tests. Since the functional tests will catch all possible bugs, the unit tests will not help catch more bugs. There are, however, some tradeoffs between using functional tests exclusively and a combination of unit tests, integration tests, and functional tests.
Unit tests run faster. If you've ever worked on a big project where the test suite takes hours to run, you can understand why fast tests are important.
In my experience, practically speaking, functional tests are more likely to be flaky. For example, sometimes the headless capybara-webkit browser just can't reach your test server for some reason, but you re-run it and it works fine.
Unit tests are easier to debug. Assuming that the unit test has caught a bug, it's easier and faster to pinpoint exactly where the problem is.
On the other hand, assuming you decide to just keep your functional tests and not add any unit tests:
If you ever need to re-architect the entire system, you may not have to rewrite any tests. If you had unit tests, a lot of them will probably be deleted or rewritten.
If you ever need to re-architect the entire system, you won't have to worry about regressions. If you had relied on unit tests to cover corner cases, but you were forced to delete or rewrite those unit tests, your new unit tests are more likely to have mistakes in them than the old unit tests.
Once you already have the functional test environment set up and you have gotten over the learning curve, additional functional tests are often easier to write, and easier to write correctly, than a combination of unit tests, integration tests, and functional tests.