At my office we have a dispute regarding the necessity of unit tests, in addition to integration tests, for classes whose main responsibility is interacting with a filesystem (DB, etc.).
The integration tests we have are almost unit tests, because the tested object doesn't interact with other objects at all. The only reason we call these tests integration tests is that they use the real filesystem. The proposal is to make the tested class use a filesystem-layer component, mock that component in tests (so we would call them unit tests), and check the interaction with the component rather than the results on the real filesystem. Whether this change is necessary is what we are debating.
One point of view is that unit tests are always required, because:
Writing unit tests makes the code much better
With unit tests, you don't need to worry about the real filesystem or about side effects such as files appearing in the wrong locations
A developer can fully test the results by making the tested class use a filesystem mock and setting proper expectations on that mock
It is OK to tie the mock expectations to the specific internal algorithm of the tested class, because unit tests are white-box tests
Thus, unit tests must always be written for such a class, and the class must always use a filesystem-layer component so that it can be tested this way.
The other point of view is that unit tests are not needed for the specific edge case of classes devoted to filesystem interaction, because:
It is not possible to properly verify that a tested class works just by using a simple mock instead of the real filesystem (or a full emulation of it). The filesystem is such a complex component that:
A tested class can work in many different ways to achieve a successful result. The mock expectations cover only one or two possible scenarios, so a unit test erroneously reports failures for a class that properly implements a good algorithm which simply differs from the expected one.
A tested class can work in a way that the mock accepts as a successful scenario while still not producing the right result, for reasons that only show up on a real filesystem. A mock cannot cover all of these reasons.
A unit test with a mock and expectations is very fragile, because it is tightly tied to the tested class's internal algorithm, and it erroneously fails even on correct changes to that algorithm.
Integration testing is a proper and full replacement for unit testing when the class has just one or two public methods and its only dependency is the filesystem. For this case, integration testing gives the same benefits as unit testing: clear dependencies, more readable code, etc.
Thus, unit testing with filesystem mocking is not needed in our case; it is fragile and inaccurate for this particular kind of class.
So, to sum it up, the question is:
Is integration testing fully sufficient for the edge case of a non-complex class whose main responsibility is to work with a filesystem (DB, etc.)?
The only difference between the integration and unit tests for this class is that the unit tests would use a filesystem mock (the class would be fully isolated), while the integration tests use the real filesystem.
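To make the two options concrete, here is a minimal sketch (Python is used purely for illustration; the ReportWriter class, its write_text filesystem-layer method, and the test names are hypothetical):

```python
from unittest import mock
import tempfile, os


class ReportWriter:
    """Writes a report through an injected filesystem-layer component."""

    def __init__(self, fs):
        self.fs = fs  # fs exposes write_text(path, content)

    def save(self, path, lines):
        self.fs.write_text(path, "\n".join(lines))


# "Unit" style: the filesystem layer is mocked; we assert on the interaction.
def test_save_unit():
    fs = mock.Mock()
    ReportWriter(fs).save("report.txt", ["a", "b"])
    fs.write_text.assert_called_once_with("report.txt", "a\nb")


# "Integration" style: a real (temporary) filesystem; we assert on the result.
class RealFs:
    def write_text(self, path, content):
        with open(path, "w") as f:
            f.write(content)


def test_save_integration():
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "report.txt")
        ReportWriter(RealFs()).save(path, ["a", "b"])
        with open(path) as f:
            assert f.read() == "a\nb"
```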
I would appreciate references to classic books, or articles/presentations by well-known industry people, so we have really strong ground to support the resulting conclusion.
The short answer here is yes, you could fully test a class with 'integration' tests. The better question, though, is should you do so?
I think you're getting too hung up on the difference in definitions between a 'unit test' (no outside dependencies) and an 'integration test' (has such dependencies). The goal with testing is to give you confidence that your code is working at all times, while keeping the associated costs of having that confidence down. So your question
Is integration testing fully sufficient for the edge case of a non-complex class whose main responsibility is to work with a filesystem (DB, etc.)?
is somewhat incomplete.
The most useful part of that distinction between 'unit' and 'integration' for our discussion is this: unit tests are easier and cheaper to write, maintain, and run.
To write a unit test, you just need to know the code. If a unit test fails, you know it's because of changes to the code. Writing an integration test requires setting up dependencies, e.g. creating files with specific contents, inserting rows into a database, etc. If an integration test fails, it could be your code, or it could be your dependencies. For these reasons and others, integration tests are more complex, and therefore expensive, to create, maintain, and run.
That increased expense should push the developer to separate classes encapsulating business logic from classes that handle interaction with outside systems, in an effort to minimize the number of integration tests required. The business logic can be tested with unit tests, which are cheaper.
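As a rough illustration of that separation (Python, with hypothetical names; not the poster's actual code), the business logic can be a pure function while a thin wrapper owns the filesystem:

```python
def summarize(balances):
    """Pure business logic: cheap to unit test, no filesystem or DB involved."""
    return {"total": sum(balances), "count": len(balances)}


class SummaryRepository:
    """Thin I/O wrapper: the only part that needs an integration test."""

    def __init__(self, path):
        self.path = path

    def save(self, summary):
        with open(self.path, "w") as f:
            f.write(f"{summary['count']},{summary['total']}\n")


# Cheap unit test for the logic, with no external dependencies:
def test_summarize():
    assert summarize([10, 20]) == {"total": 30, "count": 2}
```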
Edit
It is possible that your class's logic is complicated precisely because it has to handle complicated behavior in the underlying external dependency (i.e., the filesystem in question). In that case, mocking the filesystem may be quite difficult in itself, and it may be cheaper and easier to use a properly set-up filesystem and write 'integration' tests.
The important point to keep in mind is what you're trying to achieve: confidence at an acceptable cost. If 'integration' tests are cheap enough, great. If you can get the same confidence more cheaply using 'unit' tests, even better. The exact mix depends on the problem at hand.
It would be preferable to have a known state of the filesystem or DB for the tests. As an example, you do not want a test to fail because it is trying to insert a record that already exists; that failure is not due to the code but to a problem with the DB. The same thing can happen with the filesystem. However, you should write the best test that you are able to. If you can't easily mock the filesystem or whatever, then interact with it - just realize that if the test fails, it may not be a problem with the code.
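One way to get that known state is to build it fresh for every test; a minimal sketch in Python (standard-library tempfile and unittest assumed, file names hypothetical):

```python
import tempfile, shutil, os, unittest


class FileStoreTest(unittest.TestCase):
    def setUp(self):
        self.dir = tempfile.mkdtemp()                   # known, empty starting state
        with open(os.path.join(self.dir, "seed.txt"), "w") as f:
            f.write("known contents")                   # seed only what the test needs

    def tearDown(self):
        shutil.rmtree(self.dir)                         # leave nothing behind

    def test_reads_seeded_file(self):
        with open(os.path.join(self.dir, "seed.txt")) as f:
            self.assertEqual(f.read(), "known contents")
```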
An ugly test is better than no test. --The Way of Testivus
http://www.artima.com/weblogs/viewpost.jsp?thread=203994
Now even if you do have tests with mocks, that does not mean you should not have QA or some sort of integration test to make sure that everything connects correctly. My view is that unit tests verify that the internals of the code work correctly, while integration tests tell me that all the pieces work together.
I don't know what language you are using but the documentation for PHPUnit gives some ideas about testing the DB and filesystem.
http://www.phpunit.de/manual/current/en/database.html
http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubbing-and-mocking-web-services
http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.mocking-the-filesystem
A unit test with a mock and expectations is very fragile, because it is tightly tied to the tested class's internal algorithm, and it erroneously fails even on correct changes to that algorithm.
When testing with mocks, you should not tie the test to the algorithm. All you are testing is the expected behavior of the class, not how it goes about producing it.
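As a sketch of the difference (Python; the FileSync class and its fs collaborator are hypothetical), assert on the outcome you expect rather than on the exact sequence of internal calls:

```python
from unittest import mock


class FileSync:
    """Hypothetical class under test: copies files that are missing at the destination."""

    def __init__(self, fs):
        self.fs = fs

    def sync(self, names, dest):
        for name in names:
            if not self.fs.exists(dest + name):
                self.fs.copy(name, dest + name)


def test_copies_missing_files():
    fs = mock.Mock()
    fs.exists.return_value = False

    FileSync(fs).sync(["a.txt"], dest="backup/")

    # Behavior-level assertion: the file ended up copied to the destination.
    fs.copy.assert_called_once_with("a.txt", "backup/a.txt")
    # Deliberately NOT asserting the exact sequence of exists()/copy() calls;
    # pinning the order would tie the test to one particular algorithm.
```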
I am new to unit testing, but I like to think that I believe in beautifully written code and properly designed architectures.
My question is: aren't unit tests focusing too much on dependencies between objects? What do you do when your unit test fails because a dependency your method used to call before is no longer called (a design decision), or because your method now calls another method or dependency (again a design decision)? Do you redesign your tests? If that's the case, then unit testing helps very little to reduce coupling and improve cohesion between components.
Maybe my opinion is too broad, but in general, how do people handle dependencies in well-written unit tests? I guess the best way would be to have no dependencies at all, with every method relying only on the parameters given to it, but this is hardly ever the case in reality. In addition, faking every dependency method for every possible call is somewhat subjective and time-wasting, because at some future point the class under test may simply no longer need the dependency.
I would suggest that you look at Test Driven Development (TDD), as I believe this technique will help you with your design issues. By writing unit tests before writing the production code, you will need to think about how to make your production code testable. This is better than the test-later approach, where you write the production code first and then try to shoe-horn tests around it.
To deal with dependencies, think about what dependencies are causing you problems.
External Dependencies
If your tests use an external resource, such as a file, then you are writing an integration test, not a unit test. I've written many tests that use an external file, and I simply created a copy of the file in my test project. This file copy will contain dummy data required for my tests.
If your test requires a database, then again you're writing an integration test. Personally, I create a local copy of the database on my PC and run my tests against it.
Object Dependencies
If you are worried about code dependencies (e.g. your test will fail if a private method's signature is changed), then you are testing at the wrong level of abstraction. By that I mean make sure that your tests call public APIs and not private ones. To cement this point, use interfaces for your objects to ensure an expected contract for any object that implements them.
I would also recommend that you try using a mocking framework such as RhinoMocks, Moq or TypeMock
A mocking framework will help you remove the dependency on, for example, having a database available for your tests. I personally use TypeMock, it's not cheap but it's by far the most powerful tool out there.
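For what it's worth, the same idea looks roughly like this in Python with the standard library's unittest.mock (RhinoMocks/Moq/TypeMock play the equivalent role in .NET; the InvoiceService and repository names are hypothetical):

```python
from unittest import mock


class InvoiceService:
    """Hypothetical class under test; depends only on a repository's public contract."""

    def __init__(self, repository):
        self.repository = repository

    def total_for(self, customer_id):
        return sum(self.repository.amounts_for(customer_id))


def test_total_for_sums_amounts():
    repo = mock.Mock()
    repo.amounts_for.return_value = [10, 15]      # stub the public method only

    assert InvoiceService(repo).total_for(42) == 25
    repo.amounts_for.assert_called_once_with(42)  # public contract, not internals
```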
If you are talking about unit testing, you have no dependencies, because a unit test tests only a single class (Java, C++, Ruby, Python). What you are describing sounds more like integration testing, which is different. Furthermore, if you have too many dependencies, your coupling is too high, which is not good, but of course not always avoidable.
Unit tests should test the behavior, not the implementation. That way, one can rely on the unit tests when changing the implementation or refactoring the code. Removing a dependency (by inlining the class, for instance) does not break the test.
Testing the implementation leads to brittle tests that get in the way when refactoring.
Let me start with definitions:
A unit test is a software verification and validation method in which a programmer tests whether individual units of source code are fit for use.
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up: developers refer to automated integration tests as unit tests. Some also argue over which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically, I want to collect arguments to present to development teams that write ONLY integration tests and consider them unit tests.
Any test that involves components from different layers is considered an integration test, as opposed to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't a problem in themselves; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) finds that problem. Many levels of people get involved, and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
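A sketch of such a case (Python, with a hypothetical ConfigLoader and store): the failure path is hard to reach through a real system, but trivial to reach with a stubbed dependency:

```python
from unittest import mock


class ConfigLoader:
    """Hypothetical class: falls back to defaults if the store is unreadable."""

    DEFAULTS = {"retries": 3}

    def __init__(self, store):
        self.store = store

    def load(self):
        try:
            return self.store.read()
        except IOError:
            return dict(self.DEFAULTS)


def test_falls_back_to_defaults_on_read_failure():
    store = mock.Mock()
    store.read.side_effect = IOError("disk unavailable")  # force the rare error path

    assert ConfigLoader(store).load() == {"retries": 3}
```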
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I write unit tests, and ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context-specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it.
I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests verify that a single class is doing what it was designed to do. Integration tests verify that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component such as a method or at most a class is supposed to test that component in every legal scenario (of course, one abstracts equivalence classes but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you listed. Without unit tests, it is more likely that a change to a unit that essentially operates it in a new scenario would not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible, and extensible.
When you check in, your build server should run only the unit tests, and they should finish in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
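One way to keep that split practical is to tag the slow tests so the build server can filter them; a minimal sketch assuming pytest and its marker mechanism (the marker name "integration" is just a convention you would register in pytest.ini):

```python
import pytest


def test_discount_calculation():
    # Fast unit test: runs on every check-in.
    assert round(100 * 0.85, 2) == 85.0


@pytest.mark.integration
def test_orders_roundtrip_through_database():
    # Slow test that touches a real database: run overnight or on demand.
    ...


# On check-in:    pytest -m "not integration"
# Nightly build:  pytest -m integration
```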
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests, though: they have dependencies and take a long time to run. The two kinds should be run separately, and both as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, approaching it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test tells you that something is broken, but often doesn't give you much information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your functions are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test gives you detailed, specific information about why it's failing, which should in theory make it easier to debug.
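A sketch of that style (Python with pytest's parametrize; the shipping_cost function is hypothetical): each case pins an input to an expected output, including the boundary and the error path:

```python
import pytest


def shipping_cost(weight_kg):
    """Hypothetical function under test."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0


@pytest.mark.parametrize("weight, expected", [
    (0.5, 5.0),     # minimum band
    (1.0, 5.0),     # boundary
    (3.0, 9.0),     # per-kg surcharge
])
def test_shipping_cost(weight, expected):
    assert shipping_cost(weight) == expected


def test_rejects_non_positive_weight():
    with pytest.raises(ValueError):
        shipping_cost(0)
```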
In the end, I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low level details including but not limited to 'method conditions', checks, loops, defaulting, calculations etc.
Integration testing: Test a wider scope involving a number of components, which can impact each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours; their purpose is to prove that systems/components work fine when integrated together.
(I think) what the OP refers to here as integration tests leans more toward scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes and testing only their own feature code, because they think someone else tested the rest so it should be fine. This helps code coverage but can harm the application in general.
In theory, the small isolated scope of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should mock as little as possible. Mocking the API and persistence layers, for example, is reasonable. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking; if every developer working on the project does this as well, it gets even easier. Then the tests are actually useful. These kinds of tests have an integration character to them; they aren't as easy to write, but they help you find design flaws in your code. If something is not easy to test, adapt your code to make it easy to test (TDD). A minimal sketch of this style follows the pros/cons lists below.
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code-quality checking (e.g. coverage)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
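Here is the minimal sketch promised above (Python, with hypothetical Order/Checkout names): real domain objects are used as-is, and only the persistence edge is replaced:

```python
from unittest import mock


class Order:
    def __init__(self, lines):
        self.lines = lines

    def total(self):
        return sum(qty * price for qty, price in self.lines)


class Checkout:
    def __init__(self, repository):
        self.repository = repository          # the only collaborator we fake

    def place(self, order):
        self.repository.save(order)
        return order.total()


def test_checkout_uses_real_domain_objects():
    repo = mock.Mock()                        # stub only the persistence edge
    order = Order([(2, 10.0), (1, 5.0)])      # real domain object, not a mock

    assert Checkout(repo).place(order) == 25.0
    repo.save.assert_called_once_with(order)
```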
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you know where to look first when it fails.
Pros:
Test close to real world e2e scenario
Finds Issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistence and API) pollution issues (needs cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just to show good statistics to management.
I've used unit tests successfully for a while, but I'm beginning to think they're only useful for classes/methods that actually perform a fair amount of logic - parsers, doing math, complex business logic - all good candidates for testing, no question. I'm really struggling to figure out how to use testing for another class of objects: those which operate mostly via delegation.
Case in point: my current project coordinates a lot of databases and services. Most classes are just collections of service methods, and most methods perform some basic conditional logic, maybe a for-each loop, and then invoke other services.
With objects like this, mocks are really the only viable strategy for testing, so I've dutifully designed mocks for several of them. And I really, really don't like it, for the following reasons:
Using mocks to specify expectations for behavior makes things break whenever I change the class implementation, even if it's not the sort of change that ought to make a difference to a unit test. To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order." I like tests because I am free to change things with the confidence that I'll know if something breaks - but mocks just make it a pain in the ass to change anything.
Writing the mocks is often more work than writing the classes themselves, if the intended behavior is simple.
Because I'm using a completely different implementation of all the services and component objects in my test, in the end, all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work. Boring. I'm not worried about those.
The core of my application is really how all the pieces work together, so I'm considering ditching unit tests altogether (except for places where they're clearly appropriate) and moving to external integration tests instead - harder to set up, covering fewer possible cases, but actually exercising the system as it is meant to be run.
I'm not seeing any cases where using mocks is actually useful.
Thoughts?
If you can write integration tests that are fast and reliable, then I would say go for it.
Use mocks and/or stubs only where necessary to keep your tests that way.
Notice, though, that using mocks is not necessarily as painful as you described:
Mocking APIs let you use loose/non-strict mocks, which will allow all invocations from the unit under test to its collaborators. Therefore, you don't need to record all invocations, but only those which need to produce some required result for the test, such as a specific return value from a method call.
With a good mocking API, you will have to write little test code to specify mocking. In some cases you may get away with a single field declaration, or a single annotation applied to the test class.
You can use partial mocking so that only the necessary methods of a service/component class are actually mocked for a given test. And this can be done without specifying said methods in strings.
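A small Python illustration of the "loose mock" point (unittest.mock.Mock is non-strict by default; the Greeter class and its service methods are hypothetical): only the call that matters is stubbed, and incidental calls are simply absorbed:

```python
from unittest import mock


class Greeter:
    """Hypothetical unit under test: one call we care about, one we don't."""

    def __init__(self, service):
        self.service = service

    def greet(self, user_id):
        self.service.log("greeting requested")        # incidental collaboration
        user = self.service.fetch_user(user_id)        # the call that matters
        return f"Hello, {user['name']}!"


def test_greet_with_loose_mock():
    service = mock.Mock()                              # non-strict by default
    service.fetch_user.return_value = {"name": "Ada"}  # stub only what is needed

    assert Greeter(service).greet(7) == "Hello, Ada!"
    # No expectation was recorded for log(); the loose mock absorbs it silently.
    service.fetch_user.assert_called_once_with(7)
```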
To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order."
I agree. Behavior testing with mocks can lead to brittle tests, as you've found. State-based testing with stubs reduces that issue. Fowler weighs in on this in Mocks Aren't Stubs.
Writing the mocks is often more work than writing the classes themselves
For mocks or stubs, consider using an isolation (mocking) framework.
in the end, all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work
Branches and loops are logic; I would recommend testing them. There's no need to test getters and setters, one-line pure delegation methods, and so forth, in my opinion.
Integration tests can be extremely valuable for a composite system such as yours. I would recommend them in addition to unit tests, rather than instead of them.
You'll definitely want to test the classes underlying your low-level or composing services; that's where you'll see the biggest bang for the buck.
EDIT: Fowler doesn't use the "classical" term the way I think of it (which likely means I'm wrong). When I talk about state-based testing, I mean injecting stubs into the class under test for any dependencies, acting on the class under test, then asserting against the class under test. In the pure case I would not verify anything on the stubs.
Writing integration tests is a viable option here, but they should not replace unit tests. Since you stated you're writing mocks yourself, I suggest using an isolation framework (aka mocking framework), which I am pretty sure will be available for your environment too.
Since you've posted several questions in one, I'll answer them one by one.
How do I write useful unit tests for a mostly service-oriented app?
Do not rely on unit tests for a "mostly service-oriented app"! Yes, I said that in a sentence. These types of apps are meant to do one thing: integrate services. It's therefore more pressing that you write integration tests instead of unit tests, to verify that the integration is working correctly.
I'm not seeing any cases where using mocks is actually useful.
Mocks can be extremely useful, but I wouldn't use them on controllers. Controllers should be covered by integration tests. Services can be covered by unit tests but it may be wise to have them as separate modules if the amount of testing slows down your project.
Thoughts?
For me, I tend to think about a few things:
What is my application doing?
How expensive would it be to perform system level / integration tests?
Can I split my application up into modules that can be tested separately?
In the scenario you've provided, I'd say your application is an integration of many services. Therefore, I'd lean heavily on integration tests over unit tests. I'd bet most of the mocks you've written have been for HTTP-related classes and the like.
I'm a bigger fan of integration / system level tests wherever possible for the following reasons:
In this day and age of "moving fast", refactoring yesterday's designs happens at an ever-increasing rate. Integration tests aren't concerned with implementation details at all, which facilitates rapid change. Dynamic languages are in full swing, making mocks even more dangerous and brittle; with a static language, mocks are much safer because your tests won't compile if they try to stub out a non-existent or misspelled method name.
The amount of code written in an integration test is usually 60% less than the amount of code written in unit tests to achieve the same level of coverage, so development time is less. "Yes, but it takes longer to run integration tests..." - that's where you need to be pragmatic, until running integration tests actually slows you down.
Integration tests catch more bugs. Mocking is often contrived and removes the developer from the realities of what their changes will do to the application as a whole. I've allowed way more bugs into production under the "safety net" of 100% unit test coverage than I would have with integration tests.
If integration testing is slow for my application, then I haven't split it up into separate modules. This is often an early indicator that I need to extract some things into separate pieces.
Integration tests do way more for you than reach code coverage; they're also an indicator of performance issues, network problems, etc.
So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially.
A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies goes there. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up.
The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn relies on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough.
Back to the thought, though. Is there any value in semi-integration tests, where a test-fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?
Your question has been debated for years and can also be rephrased as "what is a unit"?
There is no law of unit testing that says you need to test each class in isolation. However, to be maintainable, you really want tests to have to change only when the behavior they test changes. Looked at this way, it is often reasonable to use concrete versions of close collaborators and fakes for more distant ones.
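A sketch of that compromise (Python, with hypothetical names): the close collaborator is the real concrete class, while the distant one (a payment gateway that would hit the network) is faked:

```python
from unittest import mock


class PriceCalculator:                       # close collaborator: use the real thing
    def total(self, items):
        return sum(items)


class OrderProcessor:                        # the "unit" under test spans two classes
    def __init__(self, calculator, gateway):
        self.calculator = calculator
        self.gateway = gateway               # distant collaborator: fake it

    def process(self, items):
        amount = self.calculator.total(items)
        self.gateway.charge(amount)
        return amount


def test_process_charges_the_computed_total():
    gateway = mock.Mock()
    amount = OrderProcessor(PriceCalculator(), gateway).process([10, 15])

    assert amount == 25
    gateway.charge.assert_called_once_with(25)
```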
The one place I absolutely use fakes of one kind or another is to follow Michael Feathers' rules of unit testing.
I don't see value in integration tests that just link together fully unit-testable internal classes.
It seems to me that the value in integration tests is where it touches the platform or external interfaces, i.e. contracts you cannot unit test.