Assume that I'm writing an application which uses Test Driven Development.
All the samples that I find are very small examples that try to explain how tests need to be written in TDD.
When you write a test in TDD, you write a very small piece of code whose purpose is to test a single piece of code, a single method, so a unit test.
After some time, a requirement comes in from the client and you need to change your original code, allowing it to accept many more arguments and splitting the method into multiple methods across multiple layers.
Let's say that logging is added when a failure occurs. What do I need to test then, the logging component separately, or chained together with the original method?
This means that the original unit test is in fact becoming an integration test as I'm testing multiple components together now.
Is this something that should be avoided, or how does one solve those kind of issues if needed?
Kind regards
TDD in the real world actually uses both unit tests and integration tests. Unit tests are what you see in tutorials because simple examples are easier to understand, but real applications need some integration tests. It's typical for the first test you write to be an integration test (see BDD).
However, integration tests are slow and hard to maintain (they touch more of the system than unit tests, so they change more frequently), so it's good to have only as many integration tests as needed and do as much of your testing with unit tests as is reasonable.
When requirements on a class cause it to become larger and you refactor the class into smaller classes, its unit tests are now integration tests. Address this by writing focused unit tests for the new classes and removing most of the old tests for the original class. It may be appropriate to leave behind one or a few of the old tests as integration tests. It also may be appropriate to rewrite some of the old tests to use test doubles (stubs, mocks, etc.) for what are now instances of other classes. Coincidentally, I recently wrote an answer about the mechanics of rewriting tests when you refactor a class out of another class.
In addition to the other answers, you could have a look at the extended TDD cycle as defined in the book Growing Object-Oriented Software Guided by Tests. They are using acceptance tests to drive the inner loop of writing unit tests; depending on the situation, however, I found that you can also use integration tests to do that.
So there is no need to avoid them. What matters in my experience is the granularity and number of tests (fewer integration tests, more unit tests).
TDD or not, the idea of a unit test is that you isolate a unit of the application and verify its code flows in isolation. A unit is typically a class, and you would be looking at at least one unit test per code branch of a method. E.g. if classA.methodA() has 3 branches, you will have 3+ unit tests for that method.
A true unit test injects the mocked/stubbed dependencies into the component, invokes the method to be tested and verifies its behavior and/or object state. Unit tests in principle should encourage you to improve the design of your source code in terms of loose coupling, separation of concerns, etc. (SOLID principles).
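As a sketch of that idea (the class and test names here are made up, using xUnit-style tests and a hand-rolled stub rather than any particular mocking library): a method with three branches gets one unit test per branch, each test injecting a stubbed dependency and asserting on the returned value.

    using Xunit;

    public interface IDiscountPolicy { decimal DiscountFor(string customerType); }

    public class PriceCalculator
    {
        private readonly IDiscountPolicy _policy;
        public PriceCalculator(IDiscountPolicy policy) { _policy = policy; }

        public decimal FinalPrice(decimal basePrice, string customerType)
        {
            if (basePrice <= 0) return 0m;                      // branch 1
            var discount = _policy.DiscountFor(customerType);
            return discount > 0                                 // branches 2 and 3
                ? basePrice - basePrice * discount
                : basePrice;
        }
    }

    // A trivial hand-rolled stub: it returns whatever the test configured.
    public class StubDiscountPolicy : IDiscountPolicy
    {
        private readonly decimal _discount;
        public StubDiscountPolicy(decimal discount) { _discount = discount; }
        public decimal DiscountFor(string customerType) => _discount;
    }

    public class PriceCalculatorTests
    {
        [Fact]
        public void ZeroOrNegativePriceReturnsZero() =>
            Assert.Equal(0m, new PriceCalculator(new StubDiscountPolicy(0m)).FinalPrice(-5m, "regular"));

        [Fact]
        public void DiscountBranchAppliesTheDiscount() =>
            Assert.Equal(90m, new PriceCalculator(new StubDiscountPolicy(0.10m)).FinalPrice(100m, "gold"));

        [Fact]
        public void NoDiscountBranchLeavesThePriceUntouched() =>
            Assert.Equal(100m, new PriceCalculator(new StubDiscountPolicy(0m)).FinalPrice(100m, "regular"));
    }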
Further, code coverage is a good measure of the quality of your unit tests, but striving for 100% isn't advised. Also, writing unit tests for every application layer is overkill; you would want to target the layers that contain business logic to achieve a good return on investment. Lastly, do not write unit tests without having a Continuous Integration pipeline, since they tend to become stale very quickly.
Conversely, when you start verifying two or more units in one test, it becomes an integration test, since your test result is influenced by the success or failure of each unit. These tend to require more effort to set up the environment, can be flaky due to external dependencies, and can be slow depending on the volume of transactions. They are definitely useful, and you should aim for a level of coverage based on your budget constraints. Integration tests should also be part of the CI/CD pipeline, but can be run less often than unit tests.
Hope this helps.
There is no real fundamental difference between a unit test and an integration test.
A low level (unit) test of one of your classes will likely also exercise, and rely on, classes provided by your runtime environment or application framework. So your unit test could also be considered as an integration test of the combination of your code with the code of the runtime environment.
With no fundamental difference between them, there is no reason to be concerned if something once labelled "unit test" is now labelled "integration test".
Fake objects (and all test doubles in general) are used to assist in unit tests, but as they contain code, the question arises: should they be tested?
On one hand, they are code after all so it should be tested, on the other hand, it is not production code, so testing it seems less important.
I try to write tests for fake objects, as it often doesn't take much time and can save a lot more time debugging, but those tests still look weird to me.
By fake object I mean Gerard Meszaros definition:
A Fake Object is a much simpler and lighter weight implementation of
the functionality provided by the DOC without the side effects we
choose to do without.
Even if the fakes are not production code, you should test them, because your tests depend on the correct execution of those fakes; if the fakes don't work right, then neither do the tests. I think that any function that contains some logic in a project should be tested, even if it's not production code.
You should test if the system satisfies the requirements. It's impossible to test exactly everything.
The idea of fake objects is just to provide a class instance with static data, so you can check what will happen with that particular value.
It's a good idea to have integration tests, where you can check how the system works with all of your classes. If there is a problem, your unit tests can be more concrete about where exactly the problem is.
If you do want to test your tests after all, this presentation may give you a better idea: http://www.infoq.com/presentations/kill-better-test
A component can have dependencies on other components, so to unit test a component you fake the other components on which the "component under test" depends, which is absolutely fine: a unit test is about testing a component in isolation, not in integration with other collaborating components.
Secondly, there is still a production version of each collaborating component that you are faking for the unit test of the current component, and you should also have unit tests for the production version of each of those collaborating components.
So the basic rule is: if you properly unit test each component independently, in isolation, then when they are integrated they should work. If you want to test components together, that is called an integration test.
BDD has been touted as "TDD done right".
However TDD is widely used with unit tests, rather than end-to-end integration tests.
Which kind of test is most appropriate for BDD?
Should we write only integration tests?
Should we also write unit tests?
If yes, should there be multiple unit-tests per scenario?
And what if a unit test covers multiple scenarios? Is there a way to structure these tests when using a testing framework such as MSpec?
Which kind of test (integration tests, unit tests) is most appropriate for BDD?
I would use both, in two nested loops, as described in Behavior-Driven Development with SpecFlow and WatiN (a rough sketch in code follows this list):
* writing a failing integration test
* writing a failing unit test as part of the solution to the integration test
* making the unit test pass
* refactoring
* writing the next failing unit test as part of the integration test
* ...until the integration test passes
* writing the next failing integration test
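A compressed sketch of what those two loops might leave behind (hypothetical names, plain xUnit instead of SpecFlow/WatiN; the comments mark which loop each test belongs to):

    using Xunit;

    // Production code that the loops gradually drive into existence.
    public interface ITaxCalculator { decimal TaxFor(decimal amount, string country); }

    public class TaxCalculator : ITaxCalculator
    {
        public decimal TaxFor(decimal amount, string country) =>
            country == "DE" ? amount * 0.19m : 0m;
    }

    public class Checkout
    {
        private readonly ITaxCalculator _tax;
        public Checkout(ITaxCalculator tax) { _tax = tax; }
        public decimal Total(decimal orderValue, string country) =>
            orderValue + _tax.TaxFor(orderValue, country);
    }

    // Outer loop: an integration test wiring real collaborators together.
    // It is written first and stays red until the inner loop has produced enough code.
    public class CheckoutIntegrationTests
    {
        [Fact]
        public void TotalForGermanyIncludesVat() =>
            Assert.Equal(119m, new Checkout(new TaxCalculator()).Total(100m, "DE"));
    }

    // Inner loop: unit tests written (and made green) one at a time
    // as part of making the integration test above pass.
    public class TaxCalculatorTests
    {
        [Fact]
        public void GermanVatIsNineteenPercent() =>
            Assert.Equal(19m, new TaxCalculator().TaxFor(100m, "DE"));

        [Fact]
        public void OtherCountriesHaveNoTaxInThisSketch() =>
            Assert.Equal(0m, new TaxCalculator().TaxFor(100m, "US"));
    }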
The reason you often see acceptance/integration tests used in the BDD cycle is that many of its practitioners put a heavy emphasis on mocking and outside-in development. As such, they tend to need both end-to-end integration/acceptance tests and unit tests to cover the total behavior of the system.
When fake objects are used as collaborators instead of real objects in a unit test, it is quite easy to verify that the isolated unit under test behaves in the proper fashion (that is, that it modifies its state properly and sends the proper messages). However, this cannot verify anything more than that the individual object behaves as expected in isolation. As such, an end-to-end acceptance test is then necessary to verify that all the objects in the system, when used together, provide the promised value to the end user.
So basically within this approach unit tests and acceptance tests play the following roles:
Unit Tests - Objects in isolation behave in the proper manner
Acceptance/Integration Tests - All Objects together provide the promised value of the system.
Although I myself am a large proponent of this style of development, it is somewhat independent from the general BDD ideas. In my opinion, acceptance/integration tests are only necessary when you're isolating your system under test in your unit tests by using mocks and stubs.
I'll give the answer that seems most likely to me, but I might be off-track here.
TDD was originally implemented with unit-tests, which clarify requirements for individual components of the software by calling them and expecting certain results.
BDD simply applies the same concept to the system as a whole. So for BDD, you write integration tests, which call the system from the outside-in, by simulating mouse clicks, etc, or whatever interface the end-user uses to access the system.
BDD seems to actually be a kind of black-box/UAT testing, since it's concerned with the behaviour of the system as a whole, not with the means by which the behaviour is implemented.
So it seems advisable to write unit tests and integration tests, but at the very least, integration tests, so that if we don't have time to verify each component, we can at least verify the system as a whole.
The basic idea is to write the tests before the code, so for any feature, use the test appropriate to examine the feature. If the feature you are adding is "clicking on the report bug link should send an email", then an integration test seems appropriate (presumably with some unit tests for components of that feature). If the feature you are working on is "Average usage calculation should omit the highest and lowest value", a unit test is probably more appropriate. The important thing is to accurately be able to tell when what you are building is done, and later to be sure changes didn't break it. The rest is just bookkeeping.
I've used unit tests successfully for a while, but I'm beginning to think they're only useful for classes/methods that actually perform a fair amount of logic - parsers, doing math, complex business logic - all good candidates for testing, no question. I'm really struggling to figure out how to use testing for another class of objects: those which operate mostly via delegation.
Case in point: my current project coordinates a lot of databases and services. Most classes are just collections of service methods, and most methods perform some basic conditional logic, maybe a for-each loop, and then invoke other services.
With objects like this, mocks are really the only viable strategy for testing, so I've dutifully designed mocks for several of them. And I really, really don't like it, for the following reasons:
Using mocks to specify expectations for behavior makes things break whenever I change the class implementation, even if it's not the sort of change that ought to make a difference to a unit test. To my mind, unit tests ought to test functionality, not specify "the method needs to do A, then B, then C, and nothing else, in that order." I like tests because I am free to change things with the confidence that I'll know if something breaks - but mocks just make it a pain in the ass to change anything.
Writing the mocks is often more work than writing the classes themselves, if the intended behavior is simple.
Because I'm using a completely different implementation of all the services and component objects in my test, in the end, all my tests really verify is the most basic skeleton of the behavior: that "if" and "for" statements still work. Boring. I'm not worried about those.
The core of my application is really how all the pieces work together, so I'm considering
ditching unit tests altogether (except for places where they're clearly appropriate) and moving to external integration tests instead - harder to set up, covering fewer possible cases, but actually exercising the system as it is meant to be run.
I'm not seeing any cases where using mocks is actually useful.
Thoughts?
If you can write integration tests that are fast and reliable, then I would say go for it.
Use mocks and/or stubs only where necessary to keep your tests that way.
Notice, though, that using mocks is not necessarily as painful as you described:
Mocking APIs let you use loose/non-strict mocks, which will allow all invocations from the unit under test to its collaborators. Therefore, you don't need to record all invocations, but only those which need to produce some required result for the test, such as a specific return value from a method call.
With a good mocking API, you will have to write little test code to specify mocking. In some cases you may get away with a single field declaration, or a single annotation applied to the test class.
You can use partial mocking so that only the necessary methods of a service/component class are actually mocked for a given test. And this can be done without specifying said methods in strings.
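For instance, with a mocking library such as Moq (assuming something similar is available in your environment; the service names below are invented), a loose mock answers every un-setup call with a default value, so the test only has to specify the one interaction it actually cares about:

    using Moq;
    using Xunit;

    public interface IInventoryService
    {
        bool IsInStock(string sku);
        void ReserveItem(string sku);   // the test below does not care about this call
        void AuditAccess(string user);  // nor this one
    }

    public class OrderService
    {
        private readonly IInventoryService _inventory;
        public OrderService(IInventoryService inventory) { _inventory = inventory; }

        public string PlaceOrder(string sku, string user)
        {
            _inventory.AuditAccess(user);
            if (!_inventory.IsInStock(sku)) return "rejected";
            _inventory.ReserveItem(sku);
            return "accepted";
        }
    }

    public class OrderServiceTests
    {
        [Fact]
        public void RejectsOrderWhenItemIsOutOfStock()
        {
            // Loose (non-strict) mock: ReserveItem and AuditAccess need no setup at all.
            var inventory = new Mock<IInventoryService>(MockBehavior.Loose);
            inventory.Setup(i => i.IsInStock("ABC")).Returns(false);

            var result = new OrderService(inventory.Object).PlaceOrder("ABC", "bob");

            Assert.Equal("rejected", result);
        }
    }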
To my mind, unit tests ought to test
functionality, not specify "the
methods needs to do A, then B, then C,
and nothing else, in that order."
I agree. Behavior testing with mocks can lead to brittle tests, as you've found. State-based testing with stubs reduces that issue. Fowler weighs in on this in Mocks Aren't Stubs.
Writing the mocks is often more work
than writing the classes themselves
For mocks or stubs, consider using an isolation (mocking) framework.
in the end, all my tests really verify
is the most basic skeleton of the
behavior: that "if" and "for"
statements still work
Branches and loops are logic; I would recommend testing them. There's no need to test getters and setters, one-line pure delegation methods, and so forth, in my opinion.
Integration tests can be extremely valuable for a composite system such as yours. I would recommend them in addition to unit tests, rather than instead of them.
You'll definitely want to test the classes underlying your low-level or composing services; that's where you'll see the biggest bang for the buck.
EDIT: Fowler doesn't use the "classical" term the way I think of it (which likely means I'm wrong). When I talk about state-based testing, I mean injecting stubs into the class under test for any dependencies, acting on the class under test, then asserting against the class under test. In the pure case I would not verify anything on the stubs.
Writing integration tests is a viable option here, but should not replace unit tests. Since you stated you're writing mocks yourself, I suggest using an Isolation Framework (aka mocking framework), which I am pretty sure will be available for your environment too.
Since you've posted several questions in one, I'll answer them one by one.
How do I write useful unit tests for a mostly service-oriented app?
Do not rely on unit tests for a "mostly service-oriented app"! Yes, I said that in a sentence. These types of apps are meant to do one thing: integrate services. It's therefore more pressing that you write integration tests instead of unit tests to verify that the integration is working correctly.
I'm not seeing any cases where using mocks is actually useful.
Mocks can be extremely useful, but I wouldn't use them on controllers. Controllers should be covered by integration tests. Services can be covered by unit tests but it may be wise to have them as separate modules if the amount of testing slows down your project.
Thoughts?
For me, I tend to think about a few things:
What is my application doing?
How expensive would it be to perform system level / integration tests?
Can I split my application up into modules that can be tested separately?
In the scenario you've provided, I'd say your application is an integration of many services. Therefore, I'd lean heavily on integration tests over unit tests. I'd bet most of the mocks you've written have been for HTTP-related classes etc.
I'm a bigger fan of integration / system level tests wherever possible for the following reasons:
In this day and age of "moving fast", refactoring the designs of yesterday happens at an ever-increasing rate. Integration tests aren't concerned with implementation details at all, which facilitates rapid change. Dynamic languages are in full swing, making mocks even more dangerous/brittle; with a static language, mocks are much safer because your tests won't compile if they try to stub out a non-existent or misspelled method name.
The amount of code written in an integration test is usually 60% less than the amount of code written in a unit test to achieve the same level of coverage so development time is less. "Yes but it takes longer to run integration tests..." that's where you need to be pragmatic until it actually slows you down to run integration tests.
Integration tests catch more bugs. Mocking is often contrived and removes the developer from the realities of what their changes will do to the application as a whole. I've allowed way more bugs into production under the "safety net" of 100% unit test coverage than I would have with integration tests.
If integration testing is slow for my application then I haven't split it up into separate modules. This is often an indicator early on that I need to do some extracting into separation.
Integration tests do way more for you than reach code coverage, they're also an indicator of performance issues or network problems etc.
I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it comes time to write unit tests... I will write them to cover as many sets of classes as possible.
For example, if I have a Word class, I will write some unit tests for the Word class. Then, I begin writing my Sentence class, and when it needs to interact with the Word class, I will often write my unit tests such that they test both Sentence and Word... at least in the places where they interact.
Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes?
In general, because of this uncertain line, I rarely actually write integration tests... Or is using the finished product to see if all the pieces work properly the actual integration testing, even though it is manual and rarely repeated beyond the scope of each individual feature?
Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests?
The key difference, to me, is that integration tests reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected.
By contrast, a unit test testing a single method relies on the (often wrong) assumption that the rest of the software is working correctly, because it explicitly mocks every dependency.
Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working.
Say you have a method somewhere like this:
public SomeResults DoSomething(SomeInput someInput) {
    var someResults = [Do your job with someInput];
    Log.TrackTheFactYouDidYourJob();
    return someResults;
}
DoSomething is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to verify, and communicate, whether the feature is working or not.
Feature: To be able to do something
In order to do something
As someone
I want the system to do this thing
Scenario: A sample one
Given this situation
When I do something
Then what I get is what I was expecting
No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call Business Value.
If you want to write a unit test for DoSomething, you should pretend (using some mocks) that the rest of the classes and methods are working (that is, that all the dependencies the method uses are working correctly) and assert that your method is working.
In practice, you do something like:
public SomeResults DoSomething(SomeInput someInput) {
    var someResults = [Do your job with someInput];
    FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // Using a mock Log
    return someResults;
}
You can do this with Dependency Injection, some Factory Method, any mock framework, or just by extending the class under test.
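A minimal sketch of the dependency-injection variant (the ILog interface and the class names here are invented to match the example above):

    // The Log dependency is injected through the constructor, so a test can pass a fake.
    public class SomeInput { }
    public class SomeResults { }

    public interface ILog { void TrackTheFactYouDidYourJob(); }

    public class FakeAlwaysWorkingLog : ILog
    {
        public void TrackTheFactYouDidYourJob() { /* never fails, records nothing */ }
    }

    public class SomethingDoer
    {
        private readonly ILog _log;
        public SomethingDoer(ILog log) { _log = log; }

        public SomeResults DoSomething(SomeInput someInput)
        {
            var someResults = new SomeResults(); // stands in for [Do your job with someInput]
            _log.TrackTheFactYouDidYourJob();
            return someResults;
        }
    }

    // In the unit test:   new SomethingDoer(new FakeAlwaysWorkingLog()).DoSomething(...)
    // In production code: new SomethingDoer(realLog).DoSomething(...)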
Suppose there's a bug in Log.TrackTheFactYouDidYourJob().
Fortunately, the Gherkin spec will find it and your end-to-end tests will fail.
The feature won't work, because Log is broken, not because [Do your job with someInput] is not doing its job. And, by the way, [Do your job with someInput] is the sole responsibility of that method.
Also, suppose Log is used in 100 other features, in 100 other methods of 100 other classes.
Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: they are telling the truth.
It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom, not the root cause.
Yet, DoSomething's unit test is green, because it's using a fake Log, built to never break. And, yes: it's clearly lying. It's communicating a broken feature is working. How can it be useful?
(If DoSomething()'s unit test fails, be sure: [Do your job with someInput] has some bugs.)
Suppose this is a system where one class, Log, is broken.
A single bug will break several features, and several integration tests will fail.
On the other hand, the same bug will break just one unit test.
Now, compare the two scenarios:
* All your features using the broken Log are red.
* All your unit tests are green; only the unit test for Log is red.
Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and seek them. Unit testing means mocking. If you aren't mocking, you aren't unit testing.
The difference
Integration tests tell what's not working. But they are of no use in guessing where the problem could be.
Unit tests are the sole tests that tell you exactly where the bug is. To provide this information, they must run the method in a mocked environment, where all other dependencies are assumed to work correctly.
That's why I think that your sentence "Or is it just a unit test that spans 2 classes" is somehow displaced. A unit test should never span 2 classes.
This reply is basically a summary of what I wrote here: Unit tests lie, that's why I love them.
When I write unit tests I limit the scope of the code being tested to the class I am currently writing by mocking dependencies. If I am writing a Sentence class, and Sentence has a dependency on Word, I will use a mock Word. By mocking Word I can focus only on its interface and test the various behaviors of my Sentence class as it interacts with Word's interface. This way I am only testing the behavior and implementation of Sentence and not at the same time testing the implementation of Word.
Once I've written the unit tests to ensure Sentence behaves correctly when it interacts with Word based on Word's interface, then I write the integration test to make sure that my assumptions about the interactions were correct. For this I supply the actual objects and write a test that exercises a feature that will end up using both Sentence and Word.
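A sketch of that split (hypothetical Sentence/Word implementations; Moq is used for the mock, but a hand-rolled fake would do just as well):

    using System.Collections.Generic;
    using System.Linq;
    using Moq;
    using Xunit;

    public interface IWord { string Text { get; } }

    public class Word : IWord
    {
        public Word(string text) { Text = text; }
        public string Text { get; }
    }

    public class Sentence
    {
        private readonly List<IWord> _words = new List<IWord>();
        public void Add(IWord word) { _words.Add(word); }
        public string Render() { return string.Join(" ", _words.Select(w => w.Text)) + "."; }
    }

    public class SentenceUnitTests
    {
        [Fact]
        public void JoinsWordsWithSpacesAndEndsWithAPeriod()
        {
            // Unit test: Word is mocked, so only Sentence's behaviour is under test.
            var hello = new Mock<IWord>();
            hello.Setup(w => w.Text).Returns("hello");
            var world = new Mock<IWord>();
            world.Setup(w => w.Text).Returns("world");

            var sentence = new Sentence();
            sentence.Add(hello.Object);
            sentence.Add(world.Object);

            Assert.Equal("hello world.", sentence.Render());
        }
    }

    public class SentenceWordIntegrationTests
    {
        [Fact]
        public void RealWordsRenderTheSameWay()
        {
            // Integration test: the real Word class verifies the assumptions made above.
            var sentence = new Sentence();
            sentence.Add(new Word("hello"));
            sentence.Add(new Word("world"));

            Assert.Equal("hello world.", sentence.Render());
        }
    }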
My 10 bits :D
I was always told that unit testing is the testing of an individual component - which should be exercised to its fullest. Now, this tends to have many levels, since most components are made of smaller parts. For me, a unit is a functional part of the system. So it has to provide something of value (i.e. not a method for string parsing, but an HtmlSanitizer, perhaps).
Integration testing is the next step up: taking one or more components and making sure they work together as they should. You are then above the intricacies of worrying about how the components work individually; instead, when you enter HTML into your HtmlEditControl, it somehow magically knows whether it's valid or not.
It's a real movable line though... I'd rather focus more on getting the damn code to work, full stop ^_^
In a unit test, you test every part in isolation.
In an integration test, you test many modules of your system together.
And this is what happens when you only use unit tests: generally each part works on its own, but unfortunately not together.
Unit tests use mocks
The things you're talking about are integration tests, which actually test the whole integration of your system. But when you do unit testing, you should test each unit separately. Everything else should be mocked. So in your case of the Sentence class, if it uses the Word class, then your Word class should be mocked. This way, you'll only test your Sentence class's functionality.
I think when you start thinking about integration tests, you are speaking more of a cross between physical layers rather than logical layers.
For example, if your test concerns itself with generating content, it's a unit test; if your test concerns itself with just writing to disk, it's still a unit test; but once you test for both the I/O AND the content of the file, then you have yourself an integration test. When you test the output of a function within a service, it's a unit test, but once you make a service call and see if the function result is the same, then that's an integration test.
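To make that concrete (the names are invented; the point is only where the boundary sits): the first test below is a unit test because it checks only the generated content, while the second is an integration test because it exercises both the content and the file I/O.

    using System.IO;
    using Xunit;

    public class ReportGenerator
    {
        public string Generate(string customer) => $"Report for {customer}";

        public void WriteTo(string path, string customer) =>
            File.WriteAllText(path, Generate(customer));
    }

    public class ReportGeneratorTests
    {
        [Fact]
        public void UnitTest_ChecksOnlyTheGeneratedContent() =>
            Assert.Equal("Report for ACME", new ReportGenerator().Generate("ACME"));

        [Fact]
        public void IntegrationTest_ChecksContentAndDiskIoTogether()
        {
            var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            try
            {
                new ReportGenerator().WriteTo(path, "ACME");
                Assert.Equal("Report for ACME", File.ReadAllText(path));
            }
            finally
            {
                File.Delete(path);
            }
        }
    }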
Technically you cannot unit test just-one-class anyway. What if your class is composed with several other classes? Does that automatically make it an integration test? I don't think so.
Using single-responsibility design, it's black and white: more than one responsibility, and it's an integration test.
By the duck test (looks, quacks, waddles, it's a duck), it's just a unit test with more than one newed-up object in it.
When you get into MVC and testing it, controller tests are always integration tests, because the controller contains both a model unit and a view unit. Testing logic in that model, I would call a unit test.
In my opinion the answer is "Why does it matter?"
Is it because unit tests are something you do and integration tests are something you don't? Or vice versa? Of course not, you should try to do both.
Is it because unit tests need to be Fast, Isolated, Repeatable, Self-Validating and Timely and integration tests should not? Of course not, all tests should be these.
Is it because you use mocks in unit tests but you don't use them in integration tests? Of course not. This would imply that if I have a useful integration test, I am not allowed to add a mock for some part, for fear I would have to rename my test to "unit test" or hand it over to another programmer to work on.
Is it because unit tests test one unit and integration tests test a number of units? Of course not. Of what practical importance is that? The theoretical discussion on the scope of tests breaks down in practice anyway because the term "unit" is entirely context dependent. At the class level, a unit might be a method. At an assembly level, a unit might be a class, and at the service level, a unit might be a component.
And even classes use other classes, so which is the unit?
It is of no importance.
Testing is important, F.I.R.S.T is important, splitting hairs about definitions is a waste of time which only confuses newcomers to testing.
The nature of your tests
A unit test of module X is a test that expects (and checks for) problems only in module X.
An integration test of many modules is a test that expects problems that arise from the cooperation between the modules so that these problems would be difficult to find using unit tests alone.
Think of the nature of your tests in the following terms:
Risk reduction: That's what tests are for. Only a combination of unit tests and integration tests can give you full risk reduction, because on the one hand unit tests can inherently not test the proper interaction between modules and on the other hand integration tests can exercise the functionality of a non-trivial module only to a small degree.
Test writing effort: Integration tests can save effort because you may then not need to write stubs/fakes/mocks. But unit tests can save effort, too, when implementing (and maintaining!) those stubs/fakes/mocks happens to be easier than configuring the test setup without them.
Test execution delay: Integration tests involving heavyweight operations (such as access to external systems like DBs or remote servers) tend to be slow(er). This means unit tests can be executed far more frequently, which reduces debugging effort if anything fails, because you have a better idea what you have changed in the meantime. This becomes particularly important if you use test-driven development (TDD).
Debugging effort: If an integration test fails, but none of the unit tests does, this can be very inconvenient, because there is so much code involved that may contain the problem. This is not a big problem if you have previously changed only a few lines -- but as integration tests run slowly, you perhaps did not run them in such short intervals...
Remember that an integration test may still stub/fake/mock away some of its dependencies.
This provides plenty of middle ground between unit tests and system tests (the most comprehensive integration tests, testing all of the system).
Pragmatic approach to using both
So a pragmatic approach would be: Flexibly rely on integration tests as much as you sensibly can and use unit tests where this would be too risky or inconvenient.
This manner of thinking may be more useful than some dogmatic discrimination of unit tests and integration tests.
Unit Testing is a method of testing that verifies the individual units of source code are working properly.
Integration Testing is the phase of software testing in which individual software modules are combined and tested as a group.
Wikipedia defines a unit as the smallest testable part of an application, which in Java/C# is a method. But in your example of the Word and Sentence classes I would probably just write the tests for Sentence, since I would likely find it overkill to use a mock Word class in order to test the Sentence class. So Sentence would be my unit, and Word is an implementation detail of that unit.
I think I would still call a couple of interacting classes a unit test provided that the unit tests for class1 are testing class1's features, and the unit tests for class2 are testing its features, and also that they are not hitting the database.
I call a test an integration test when it runs through most of my stack and even hits the database.
I really like this question, because TDD discussion sometimes feels a bit too purist to me, and it's good for me to see some concrete examples.
I do the same - I call them all unit tests, but at some point I have a "unit test" that covers so much I often rename it to "..IntegrationTest" - just a name change only, nothing else changes.
I think there is a continuum from "atomic tests" (testing one tiny class, or a method) to unit tests (class level) and integration tests - and then functional tests (which normally cover a lot more stuff from the top down) - there doesn't seem to be a clean cut-off.
If your test sets up data, and perhaps loads a database/file etc., then perhaps it's more of an integration test (integration tests I find use fewer mocks and more real classes, but that doesn't mean you can't mock out some of the system).
Integration tests: Database persistence is tested.
Unit tests: Database access is mocked. Code methods are tested.
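A small sketch of that distinction (a made-up repository interface, with Moq for the mock): the unit test never touches a database, while the corresponding integration test would run the real, database-backed repository.

    using Moq;
    using Xunit;

    public interface ICustomerRepository
    {
        int CountOrders(int customerId);
    }

    public class LoyaltyService
    {
        private readonly ICustomerRepository _repository;
        public LoyaltyService(ICustomerRepository repository) { _repository = repository; }

        // The logic under test: customers with ten or more orders are "gold".
        public bool IsGoldCustomer(int customerId) => _repository.CountOrders(customerId) >= 10;
    }

    public class LoyaltyServiceTests
    {
        [Fact]
        public void TenOrdersMakeAGoldCustomer()
        {
            // Unit test: database access is mocked away; only the code method is tested.
            var repository = new Mock<ICustomerRepository>();
            repository.Setup(r => r.CountOrders(42)).Returns(10);

            Assert.True(new LoyaltyService(repository.Object).IsGoldCustomer(42));
        }
    }

    // An integration test would construct LoyaltyService with the real, database-backed
    // ICustomerRepository implementation and verify persistence end to end.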
Unit testing is testing against a unit of work or a block of code if you like. Usually performed by a single developer.
Integration testing refers to the test that is performed, preferably on an integration server, when a developer commits their code to a source control repository. Integration testing might be performed by utilities such as Cruise Control.
So you do your unit testing to validate that the unit of work you have built is working and then the integration test validates that whatever you have added to the repository didn't break something else.
Simple Explanation with Analogies
This answer will focus purely on examples.
Integration Tests
Integration tests check if everything is working together.
Unit Tests
They tell you whether one specific thing is working.
Examples
Consider a car:
Integration test for a car: e.g. does the car drive to Pondicherry and back? If so, the car as a whole is working. If it fails, you won't really know where: radiator, transmission, engine, or carburettor?
Unit test for a car: Is the engine working? This tests just the engine; nothing else. If this test fails, then you can be confident that there is a bug in the engine... This ties in closely with the concept of "fakes". You might need some keys in order to start the engine - except, you don't want to go to the hassle of actually using an ignition (with a lock)... instead, you would hotwire the car to start it... in other words, you would use a "fake" key.
Similarly, in unit testing, you would use "fakes" in order to make the engine work a particular way. And then you could simply test: "is it running".
I call unit tests those tests that white box test a class. Any dependencies that class requires is replaced with fake ones (mocks).
Integration tests are those tests where multiple classes and their interactions are tested at the same time. Only some dependencies in these cases are faked/mocked.
I wouldn't call controller tests integration tests unless one of their dependencies is a real one (i.e. not faked), e.g. IFormsAuthentication.
Separating the two types of tests is useful for testing the system at different levels. Also, integration tests tend to be longer-running, and unit tests are supposed to be quick. The execution-speed distinction means they're executed differently. In our dev processes, unit tests are run at check-in (which is fine cos they're super quick), and integration tests are run once/twice per day. I try to run integration tests as often as possible, but usually hitting the database/writing to files/making RPCs/etc. slows things down.
That raises another important point: unit tests should avoid hitting IO (e.g. disk, network, db), otherwise they slow down a lot. It takes a bit of effort to design these IO dependencies out - I can't claim I've always been faithful to the "unit tests must be fast" rule, but if you are, the benefits on a much larger system become apparent very quickly.
A little bit academic this question, isn't it? ;-)
My point of view:
For me, an integration test is a test of the whole thing, not of whether two parts out of ten work together.
Our integration test shows whether the master build (containing 40 projects) will succeed.
For the projects we have tons of unit tests.
The most important thing concerning unit tests, for me, is that one unit test must not depend on another unit test. So for me, both tests you describe above are unit tests, if they are independent. For integration tests this is not as important.
Have these tests essentially become integration tests because they now test the integration of these 2 classes? Or is it just a unit test that spans 2 classes?
I think Yes and Yes. Your unit test that spans 2 classes became an integration test.
You could avoid it by testing the Sentence class with a mock implementation - a MockWord class - which is important when those parts of the system are large enough to be implemented by different developers. In that case, Word is unit tested alone, Sentence is unit tested with the help of MockWord, and then Sentence is integration-tested with Word.
An example of a real difference could be the following:
1) An array of 1,000,000 elements is easily unit tested and works fine.
2) BubbleSort is easily unit tested on a mock array of 10 elements and also works fine.
3) Integration testing shows that something is not so fine.
If these parts are developed by a single person, the problem will most likely be found while unit testing BubbleSort, just because the developer already has the real array and does not need a mock implementation.
In addition, it's important to remember that both unit tests and integration tests can be automated and written using, for example, JUnit.
In JUnit integration tests, one can use the org.junit.Assume class to test the availability of environment elements (e.g., database connection) or other conditions.
I get asked this a lot in interviews. Until now I'd ramble on pretentiously about my expertise and pontificate about component and acceptance testing.
For years I'd understood only integration and unit tests. I could, but didn't always bother to, write unit tests as a solo developer honing my skills.
Unit tests
Unit tests are easy to implement and execute, requiring, ideally, no dependencies - that is the crucial difference, and it is what mocks are for. It is often easier not to mock everything, particularly where you gain coverage of other functions you wrote. Easier, maybe, but that isn't the idea of unit testing.
I'll reiterate, unit tests are meant to be easy to run and small. Their failure provides immediate insight into where a bug has been introduced.
Here is the hierarchy of tests, from cheap and plentiful at the bottom (unit tests) to slow, expensive, and few at the top (acceptance tests). Several more layers can be conceptualised, but are omitted for clarity.
Integration tests
With integration tests you would consider bringing in serious external dependencies, such as VMs, virtual networks and appliances. Possibly you could use actual modems, routers, and firewalls where the expense was justified.
These wouldn't be run locally but on a build server. A mixture of local Jenkins and cloud-based CI providers fulfils this need.
Other test terminology
That is the understanding that has served me for several years in industry. We could talk about component tests and arrive at a definition, but if the definition isn't in common circulation then it loses value.
Acceptance tests were what we would call business unit or customer requirements. These would lead the direction of everything and sit at the top of the pyramid (picture a dollar sign).
E2E, or end-to-end, testing was used synonymously with integration testing, but I noticed online it is placed above it. I guess it has more relevance to acceptance tests than integration tests, which would tend to be more detailed, with less interest from stakeholders (though immense interest internally in the department).
If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass.
You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!