Assume that I'm writing an application which uses Test Driven Development.
All the samples that I find are very small examples that try to explain how tests need to be written in TDD.
When you write a test in TDD, you write a very small piece of code whose purpose is to test a single piece of code: a single method, i.e. a unit test.
After some time, a requirement arrives from the client and you need to change your original code so that it accepts many more arguments, splitting the method into multiple methods over multiple layers.
Let's say that logging is added when a failure occurs. What do I need to test then, the logging component separately, or chained together with the original method?
This means that the original unit test is in fact becoming an integration test as I'm testing multiple components together now.
Is this something that should be avoided, or how does one solve these kinds of issues when they arise?
Kind regards
TDD in the real world actually uses both unit tests and integration tests. Unit tests dominate tutorials because simple examples are easier to understand, but real applications also need integration tests. In fact, it's typical for the first test you write to be an integration test (see BDD).
However, integration tests are slow and hard to maintain (they touch more of the system than unit tests, so they change more frequently), so it's good to have only as many integration tests as needed and do as much of your testing with unit tests as is reasonable.
When requirements on a class cause it to become larger and you refactor the class into smaller classes, its unit tests are now integration tests. Address this by writing focused unit tests for the new classes and removing most of the old tests for the original class. It may be appropriate to leave behind one or a few of the old tests as integration tests. It also may be appropriate to rewrite some of the old tests to use test doubles (stubs, mocks, etc.) for what are now instances of other classes. Coincidentally, I recently wrote an answer about the mechanics of rewriting tests when you refactor a class out of another class.
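For illustration only (hypothetical class names; JUnit 4 and Mockito assumed, neither of which the answer above prescribes), a test that used to exercise the extracted logic directly might be rewritten to stub the new collaborator:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class InvoiceTest {

    // Hypothetical collaborator that was refactored out of Invoice.
    interface TaxCalculator {
        double taxFor(double amount);
    }

    // Hypothetical class under test; the tax logic now lives elsewhere.
    static class Invoice {
        private final TaxCalculator taxCalculator;

        Invoice(TaxCalculator taxCalculator) {
            this.taxCalculator = taxCalculator;
        }

        double total(double netAmount) {
            return netAmount + taxCalculator.taxFor(netAmount);
        }
    }

    @Test
    public void totalAddsTaxFromCollaborator() {
        // The old test computed the tax inline; now the collaborator is stubbed,
        // so this test covers only Invoice's own responsibility.
        TaxCalculator taxCalculator = mock(TaxCalculator.class);
        when(taxCalculator.taxFor(100.0)).thenReturn(20.0);

        assertEquals(120.0, new Invoice(taxCalculator).total(100.0), 0.001);
    }
}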
In addition to the other answers, you could have a look at the extended TDD cycle described in the book Growing Object-Oriented Software, Guided by Tests. There, a failing acceptance test forms an outer loop that drives the inner loop of writing unit tests; depending on the situation, however, I have found that you can also use integration tests for that outer loop.
So there is no need to avoid them. What matters in my experience is the granularity and number of tests (fewer integration tests, more unit tests).
TDD or not, the idea of a unit test is that you isolate a unit of the application and verify its code flows in isolation. A unit is typically a class, and you would be looking at at least one unit test per code branch of a method. E.g. if classA.methodA() has 3 branches, you will have 3+ unit tests for that method.
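As a rough sketch (hypothetical class and method names; JUnit 4 assumed), a method with three branches gets at least three tests:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ClassATest {

    // Hypothetical example: methodA has three branches, so it gets three tests.
    static class ClassA {
        String methodA(int value) {
            if (value < 0) {
                return "negative";
            } else if (value == 0) {
                return "zero";
            }
            return "positive";
        }
    }

    private final ClassA classA = new ClassA();

    @Test
    public void negativeBranch() {
        assertEquals("negative", classA.methodA(-5));
    }

    @Test
    public void zeroBranch() {
        assertEquals("zero", classA.methodA(0));
    }

    @Test
    public void positiveBranch() {
        assertEquals("positive", classA.methodA(7));
    }
}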
A true unit test injects mocked/stubbed dependencies into the component, invokes the method to be tested, and verifies its behavior and/or object state. Unit tests in principle should encourage you to improve the design of your source code in terms of loose coupling, separation of concerns, etc. (SOLID principles).
Further, code coverage is a good measure of the quality of your unit tests, but striving for 100% isn't advised. Also, writing unit tests for every application layer is overkill; you would want to target the layers that contain business logic to achieve a good return on investment. Lastly, do not write unit tests without a Continuous Integration pipeline, since they tend to become stale very quickly.
On the contrary, when you start verifying two or more units in one test, it becomes an integration test, since your test result is influenced by the success or failure of each unit. These tend to require more effort to set up the environment, can be flaky due to external dependencies, and can be slow depending on the volume of transactions. They are definitely useful, and you should aim for a level of coverage based on your budget constraints. Integration tests should also be part of the CI/CD pipeline, but can be run less often than unit tests.
Hope this helps.
There is no real fundamental difference between a unit test and an integration test.
A low level (unit) test of one of your classes will likely also exercise, and rely on, classes provided by your runtime environment or application framework. So your unit test could also be considered as an integration test of the combination of your code with the code of the runtime environment.
With no fundamental difference between them, there is no reason to be concerned if something once labelled "unit test" is now labelled "integration test".
This isn't a question but maybe a request for comments about general unit testing in Grails.
I've been banging my head against writing unit tests and except for very, very simple use cases, I most always run into some snag. What I'm finding is anytime something needs to be mocked, like grailsApplication, or some other framework object, unit testing starts falling apart or you need to jump through so many hoops that it becomes counter-productive. Then, on top of this, the migration from 1.X to 2.X caused all sorts of unit/integration test refactoring, which in the long run made things easier, but still caused failures during migration.
My answer ... move all semi-complicated testing into integration tests and don't look back. It works when everything is spun up. It takes longer to run, but not longer than dealing with the unit testing headaches.
The latest (not the first) use case that caused me heartburn was trying to unit test a service that creates a domain object, which has a dependency on grailsApplication.config, and does something with said domain object. I tried most everything I found to fix it (except what actually works!), nothing worked, so I moved the unit test code to an integration test, and it passed on the first run. The unit test had complained that 'config' could not be called on a null object, or something like that, meaning grailsApplication was not there.
I really don't see the need to write unit tests when integration tests work for everything, all the time.
Using Grails 2.2.0.
I would not agree with that.
Unit tests are the building blocks of a well-written app following the concept of TDD. The rationale for having a unit test is to isolate a module from its injected dependencies and test it with the dependencies provided by the test environment instead of those provided by the container.
I can understand the pain point here. You have to go through some boilerplate setup, but that is the main point of having unit tests.
Integration tests provide everything on your plate (dependency injection on top), but this does not serve the purpose of testing a module under isolated conditions. I have been down the same path, upgrading a bunch of projects from Grails 1.3.4 to Grails 2.2.0, and faced the same problem. But I took some time to understand the highly flexible mocking mechanisms provided by the latest version of Grails. For example:
@Mock: You do not need to use mockDomain any more; the annotation takes care of mocking domain classes in unit test cases. Not to forget, the build-test-data plugin makes mocking even more convenient.
@TestFor: You can use the @TestFor annotation for Grails artefacts (controllers, services), which will do some injections for you, provided you follow a few conventions.
Not to forget the power of mixins.
You can find more about them here.
Now, to answer the problem you faced while unit testing the service class: grailsApplication is readily available in unit test cases; you do not need to mock it or try to get it from the application context. Just use grailsApplication in the test class like:
// Unit test
void testSomething() {
    assert "Hello" == grailsApplication.config.foo.bar.hello
}

// Config.groovy
foo.bar.hello = "Hello"
Isn't that simple? Just remember not to declare (def) grailsApplication yourself in the unit test class.
On the other hand, integration tests are a better approach if you are testing a service as a whole, or a portion of a service that uses more than one module.
We write tests to fail first, and then make them pass. Writing a test that passes right away makes me uncomfortable and makes me think that I am doing something wrong. :)
I both agree and disagree with the original asker's post. My response is, there are some things that unit tests are better for, but his current sticking point might not be one of them.
In my opinion, the best code has regions of complex behavior with extremely well defined input and output and regions of simple behavior with loosely defined input and output but will have no regions of loosely defined complex behavior. Obviously configuration and light business logic goes in the latter category and any "engine" code goes in the former category.
The original post references uses of configuration, which is by definition loosely defined, so it would be best if logic that is found near there is kept relatively simple. Any complex behavior near that can be abstracted out into general, parameterized behavior and this behavior can be unit tested. But for the functionality involving configuration, an integration test would probably be better.
I also ignore unit tests for the most part. They cause more problems for me than they are worth. However, I write integration tests for almost every part (though not all combinations; that's impossible).
My main purpose in testing is to prevent regressions, mostly from code changes in related parts. Integration tests suit me perfectly.
At my office we have a dispute regarding the necessity of unit tests in addition to integration tests for the classes that have the main responsibility of interacting with a filesystem (DB, etc).
The integration tests we have are almost unit tests, as the tested object doesn't interact with other objects at all. The only reason we call the tests integration tests is that the real filesystem is used. It has been proposed to make the tested class use a filesystem-layer component, mock that component in tests (so we would call them unit tests), and check the interaction with the component rather than real filesystem results. Whether this change is necessary is what we are discussing.
One point of view we have is that unit tests are always required, because:
Writing unit tests makes code much better
Having unit tests, you don't need to care about the real filesystem and side effects such as files appearing in the wrong locations
A developer can fully test results by making the tested class use a filesystem mock and setting proper expectations for that mock
It is ok to tie the mock expectations to the specific internal algorithm of the tested class, because we do white-box testing with unit tests
Thus, unit tests must always be written for such a class, and a filesystem-layer component must always be used by that class for the purpose of testing. (A sketch of what such a mock-based unit test could look like follows below.)
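A sketch of what such a mock-based unit test could look like (hypothetical names; JUnit 4 and Mockito assumed, which the discussion above does not prescribe):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ArchiveReaderTest {

    // Hypothetical filesystem-layer component the tested class depends on.
    interface FileSystemLayer {
        String readText(String path);
    }

    // Hypothetical class under test: its main responsibility is filesystem interaction.
    static class ArchiveReader {
        private final FileSystemLayer fileSystem;

        ArchiveReader(FileSystemLayer fileSystem) {
            this.fileSystem = fileSystem;
        }

        int countLines(String path) {
            String text = fileSystem.readText(path);
            return text.isEmpty() ? 0 : text.split("\n").length;
        }
    }

    @Test
    public void countsLinesWithoutTouchingTheRealFileSystem() {
        FileSystemLayer fileSystem = mock(FileSystemLayer.class);
        when(fileSystem.readText("archive.txt")).thenReturn("a\nb\nc");

        // No real files anywhere; the class is fully isolated.
        assertEquals(3, new ArchiveReader(fileSystem).countLines("archive.txt"));
    }
}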
Another point of view is that unit tests are not needed for the specific case of classes devoted to filesystem interactions, because:
It is not possible to properly verify that a tested class works just by having a simple mock instead of the real filesystem (or a full emulation of it). The filesystem is such a complex component that:
A tested class can work in many different ways to achieve a successful result. The mock expectations cover just one or two possible scenarios, so a unit test erroneously reports failures for a class that properly implements a good algorithm that merely differs from the expected one.
A tested class can work in a way that the mock registers as a successful scenario while the class still does not produce the right result, for reasons rooted in the complexity of the real filesystem. All of these reasons cannot be covered by a mock alone.
A unit test with mocks and expectations is very fragile, because it is tightly tied to the tested class's internal algorithm, and it erroneously fails even upon correct changes to that algorithm.
Integration testing is a proper and full replacement for unit testing in the case where the class has just one or two public methods and its only dependency is the filesystem. Integration testing gives the same benefits as unit testing in this case: clear dependencies, more readable code, etc.
Thus, unit testing with filesystem mocking is not needed in our case. It is fragile and inaccurate for this particular kind of class.
So, to sum it up, the question is:
Is integration testing fully sufficient for the case of a simple class whose main responsibility is to work with a filesystem (DB, etc.)?
The only difference between integration and unit tests for this class is that with unit tests a filesystem mock would be used (the class would be fully isolated), while with integration tests the real filesystem is used.
I would appreciate it if you could add references to classic books, or articles/presentations by well-known industry people, so we have really strong ground to support the resulting conclusion.
The short answer here is yes, you could fully test a class with 'integration' tests. The better question, though, is should you do so?
I think you're getting too hung up on the difference in definitions between a 'unit test' (no outside dependencies) and an 'integration test' (has such dependencies). The goal with testing is to give you confidence that your code is working at all times, while keeping the associated costs of having that confidence down. So your question
Is integration testing fully sufficient for the case of a simple
class whose main responsibility is to work with a filesystem
(DB, etc.)?
is somewhat incomplete.
The most useful part of that distinction between 'unit' and 'integration' for our discussion is this: unit tests are easier and cheaper to write, maintain, and run.
To write a unit test, you just need to know the code. If a unit test fails, you know it's because of changes to the code. Writing an integration test requires setting up dependencies, e.g. creating files with specific contents, inserting rows into a database, etc. If an integration test fails, it could be your code, or it could be your dependencies. For these reasons and others, integration tests are more complex, and therefore more expensive, to create, maintain, and run.
That increased expense should push the developer to separate classes encapsulating business logic from classes that handle interaction with outside systems, in an effort to minimize the number of integration tests required. The business logic can be tested with unit tests, which are cheaper.
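As a hedged illustration (hypothetical names, JUnit 4 assumed), business logic that has been separated from the outside world can be covered by plain, cheap unit tests with no setup or teardown at all:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical business-logic class: no filesystem, no database, no mocks needed.
    static class PriceCalculator {
        double discountedPrice(double unitPrice, int quantity) {
            double discount = quantity >= 10 ? 0.10 : 0.0;
            return unitPrice * quantity * (1.0 - discount);
        }
    }

    @Test
    public void bulkOrdersGetTenPercentOff() {
        assertEquals(900.0, new PriceCalculator().discountedPrice(10.0, 100), 0.001);
    }

    @Test
    public void smallOrdersPayFullPrice() {
        assertEquals(30.0, new PriceCalculator().discountedPrice(10.0, 3), 0.001);
    }
}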
Edit
It is possible that your class has logic that is itself complicated because it has to handle complicated behavior in the underlying external dependency (i.e. the file system in question). In that case, mocking the file system may be quite difficult in itself, and it may be cheaper/easier to just use a properly set up file system and write 'integration' tests.
The important point to keep in mind is what you're trying to achieve: confidence at an acceptable cost. If 'integration' tests are cheap enough, great. If you can get the same confidence more cheaply using 'unit' tests, even better. The exact mix depends on the problem at hand.
It would be preferable to have a known state of the filesystem or DB for the tests. As an example, you do not want a test to fail because it is trying to insert a record that already exists. That failure is not due to the code but to a problem with the DB. The same thing can happen with the filesystem. However, you should write the best test that you are able to. If you can't easily mock the filesystem or whatever, then interact with it; just realize that if the test fails, it may not be a problem with the code.
An ugly test is better than no test. --The Way of Testivus
http://www.artima.com/weblogs/viewpost.jsp?thread=203994
Now, even if you do have tests with mocks, that does not mean you should not have QA or some sort of integration test to make sure that everything connects correctly. My view is that unit tests verify that the internals of the code work correctly, and integration tests tell me that all the pieces work together.
I don't know what language you are using but the documentation for PHPUnit gives some ideas about testing the DB and filesystem.
http://www.phpunit.de/manual/current/en/database.html
http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubbing-and-mocking-web-services
http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.mocking-the-filesystem
A unit test with mocks and expectations is very fragile, because it is
tightly tied to the tested class's internal algorithm, and it
erroneously fails even upon correct changes to that algorithm.
When testing with mocks, you should not be tying the test to the algorithm. All you are testing for is the expected behavior of the class, not how it goes about producing it.
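For example (hypothetical names; JUnit 4 and Mockito assumed), the assertion targets what the caller observes, not the exact calls the class makes internally:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ReportStoreTest {

    // Hypothetical filesystem abstraction.
    interface FileStore {
        String read(String path);
    }

    // Hypothetical class under test.
    static class ReportStore {
        private final FileStore files;

        ReportStore(FileStore files) {
            this.files = files;
        }

        String loadTitle(String path) {
            // Internal detail: the title happens to be the first line of the file.
            return files.read(path).split("\n")[0];
        }
    }

    @Test
    public void returnsTheTitleOfAStoredReport() {
        FileStore files = mock(FileStore.class);
        when(files.read("q3.txt")).thenReturn("Q3 Report\nrevenue up");

        // Behavior-based assertion: check what the caller gets back.
        // Verifying exactly which filesystem calls were made, and in what order,
        // would be the brittle alternative that breaks on harmless refactorings.
        assertEquals("Q3 Report", new ReportStore(files).loadTitle("q3.txt"));
    }
}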
Like anything else, understanding the vocabulary makes it much easier to learn the language. Can anyone chime in with the words used in unit testing and their definitions (e.g. Mock, Fixture, etc.)?
This looks like a great page: http://xunitpatterns.com/Glossary.html
It includes:
SUT
synchronous test
task
TDD
test automater
test case
test code
test condition
test context
test database
test debt
test driver
test driving
test error
test failure
test fixture
test maintainer
test package
test reader
test result
test run
test smell
test stripper
test success
test suite
test-driven bug fixing
test-driven development
test-first development
test-last development
test-specific equality
test
In relation to mocking etc., this table and its references might be more useful:
http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html
Perhaps these articles will be more helpful:
Wikipedia:
In computer programming, unit testing is a software design and development method where the programmer gains confidence that individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class.
Unit testing can be done by something as simple as stepping through code in a debugger; modern applications include the use of a test framework such as xUnit.
Ideally, each test case is independent of the others; test doubles like stubs, mocks, or fake objects, as well as test harnesses, can be used to assist in testing a module in isolation. Unit testing is typically done by software developers to ensure that the code other developers have written meets software requirements and behaves as the developer intended.
MSDN:
The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating them into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.
Extreme Rules:
Unit tests enable collective code ownership. When you create unit tests you guard your functionality from being accidentally harmed. Requiring all code to pass all unit tests before it can be released ensures all functionality always works. Code ownership is not required if all classes are guarded by unit tests.
I've also found a glossary of testing terms, but it doesn't define Mock or Fixture, but there's an option to add new ones. Once the question is answered to your satisfaction maybe that could become the canonical source.
Mock,
n.
1. A type of turtle, used primarily in soup.
2. A code construct used in Unit Testing, named after #1. A mock looks like the real thing to the code being tested; however, any attempts to interact with it result only in mournful songs.
v.
To construct a mock for use in testing.
I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it is time to write unit tests... I will write them to cover as many sets of classes as possible.
For example, if I have a Word class, I will write some unit tests for the Word class. Then, I begin writing my Sentence class, and when it needs to interact with the Word class, I will often write my unit tests such that they test both Sentence and Word... at least in the places where they interact.
Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes?
In general, because of this uncertain line, I will rarely actually write integration tests... or is my using the finished product to see if all the pieces work properly the actual integration tests, even though they are manual and rarely repeated beyond the scope of each individual feature?
Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests?
The key difference, to me, is that integration tests reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected.
By contrast, a unit test testing a single method relies on the (often wrong) assumption that the rest of the software works correctly, because it explicitly mocks every dependency.
Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working.
Say you have a method somewhere like this:
public SomeResults DoSomething(someInput) {
    var someResult = [Do your job with someInput];
    Log.TrackTheFactYouDidYourJob();
    return someResult;
}
DoSomething is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to verify, and communicate, whether the feature is working.
Feature: To be able to do something
  In order to do something
  As someone
  I want the system to do this thing

Scenario: A sample one
  Given this situation
  When I do something
  Then what I get is what I was expecting
No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call Business Value.
If you want to write a unit test for DoSomething, you should pretend (using some mocks) that the rest of the classes and methods are working (that is: that all the dependencies the method uses are working correctly) and assert that your method works.
In practice, you do something like:
public SomeResults DoSomething(someInput) {
    var someResult = [Do your job with someInput];
    FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // Using a mock Log
    return someResult;
}
You can do this with dependency injection, a factory method, a mocking framework, or just by extending the class under test.
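A minimal sketch, in Java rather than the pseudocode above (hypothetical names, not the author's actual code), of what constructor injection of the Log dependency might look like:

// The Log dependency becomes an interface, so the unit test can pass a fake that never breaks.
interface Log {
    void trackTheFactYouDidYourJob();
}

class SomethingDoer {
    private final Log log;

    SomethingDoer(Log log) {            // dependency injected via the constructor
        this.log = log;
    }

    int doSomething(int someInput) {
        int someResult = someInput * 2; // stands in for "[Do your job with someInput]"
        log.trackTheFactYouDidYourJob();
        return someResult;
    }
}

class FakeAlwaysWorkingLog implements Log {
    @Override
    public void trackTheFactYouDidYourJob() {
        // Intentionally empty: a fake built to never break.
    }
}

// In the unit test: assert new SomethingDoer(new FakeAlwaysWorkingLog()).doSomething(21) == 42;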
Suppose there's a bug in Log.TrackTheFactYouDidYourJob().
Fortunately, the Gherkin spec will find it and your end-to-end tests will fail.
The feature won't work because Log is broken, not because [Do your job with someInput] isn't doing its job. And, by the way, [Do your job with someInput] is the sole responsibility of that method.
Also, suppose Log is used in 100 other features, in 100 other methods of 100 other classes.
Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: they are telling the truth.
It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom to me, not the root cause.
Yet, DoSomething's unit test is green, because it's using a fake Log, built to never break. And, yes: it's clearly lying. It's communicating a broken feature is working. How can it be useful?
(If DoSomething()'s unit test fails, be sure: [Do your job with someInput] has some bugs.)
Now suppose a system in which one class (here, Log) is broken:
A single bug will break several features, and several integration tests will fail.
On the other hand, the same bug will break just one unit test.
Now, compare the two scenarios:
All your features using the broken Log are red.
All your unit tests are green; only the unit test for Log is red.
Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and seek them. Unit testing means mocking. If you aren't mocking, you aren't unit testing.
The difference
Integration tests tell what's not working. But they are of no use in guessing where the problem could be.
Unit tests are the sole tests that tell you where exactly the bug is. To draw this information, they must run the method in a mocked environment, where all other dependencies are supposed to correctly work.
That's why I think your sentence "Or is it just a unit test that spans 2 classes" is somewhat misplaced. A unit test should never span 2 classes.
This reply is basically a summary of what I wrote here: Unit tests lie, that's why I love them.
When I write unit tests I limit the scope of the code being tested to the class I am currently writing by mocking dependencies. If I am writing a Sentence class, and Sentence has a dependency on Word, I will use a mock Word. By mocking Word I can focus only on its interface and test the various behaviors of my Sentence class as it interacts with Word's interface. This way I am only testing the behavior and implementation of Sentence and not at the same time testing the implementation of Word.
Once I've written the unit tests to ensure Sentence behaves correctly when it interacts with Word based on Word's interface, then I write the integration test to make sure that my assumptions about the interactions were correct. For this I supply the actual objects and write a test that exercises a feature that will end up using both Sentence and Word.
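A minimal sketch of that two-step approach (hypothetical Sentence/Word API; JUnit 4 and Mockito assumed):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class SentenceTest {

    // Hypothetical minimal API for the two classes under discussion.
    static class Word {
        private final String text;
        Word(String text) { this.text = text; }
        String getText() { return text; }
    }

    static class Sentence {
        private final Word[] words;
        Sentence(Word... words) { this.words = words; }

        String render() {
            StringBuilder sb = new StringBuilder();
            for (Word w : words) {
                if (sb.length() > 0) sb.append(' ');
                sb.append(w.getText());
            }
            return sb.toString();
        }
    }

    @Test
    public void unitTest_sentenceAgainstMockedWords() {
        // Unit test: only Sentence's behavior against Word's interface.
        Word hello = mock(Word.class);
        Word world = mock(Word.class);
        when(hello.getText()).thenReturn("hello");
        when(world.getText()).thenReturn("world");

        assertEquals("hello world", new Sentence(hello, world).render());
    }

    @Test
    public void integrationTest_sentenceWithRealWords() {
        // Integration test: the same feature exercised with real Word objects.
        assertEquals("hello world", new Sentence(new Word("hello"), new Word("world")).render());
    }
}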
My 10 bits :D
I was always told that unit testing is the testing of an individual component, which should be exercised to its fullest. Now, this tends to have many levels, since most components are made of smaller parts. For me, a unit is a functional part of the system. So it has to provide something of value (i.e. not a method for string parsing, but perhaps an HtmlSanitizer).
Integration testing is the next step up: taking one or more components and making sure they work together as they should. You are then above the intricacies of worrying about how the components work individually; instead, when you enter HTML into your HtmlEditControl, it somehow magically knows whether it's valid or not.
It's a real movable line though. I'd rather focus more on getting the damn code to work, full stop ^_^
In a unit test you test every part in isolation:
In an integration test you test many modules of your system together:
And this is what happens when you only use unit tests (generally each window works, unfortunately not together):
Sources:
source1
source2
Unit tests use mocks
The things you're talking about are integration tests, which actually test the whole integration of your system. But when you do unit testing, you should test each unit separately. Everything else should be mocked. So in the case of your Sentence class, if it uses the Word class, then your Word class should be mocked. This way, you'll only test your Sentence class's functionality.
I think when you start thinking about integration tests, you are speaking more of crossing physical layers rather than logical layers.
For example, if your test concerns itself with generating content, it's a unit test; if it concerns itself with just writing to disk, it's still a unit test; but once you test both the I/O and the content of the file, then you have yourself an integration test. When you test the output of a function within a service, it's a unit test, but once you make a service call and check whether the function result is the same, then that's an integration test.
Technically you cannot unit test just one class anyway. What if your class is composed of several other classes? Does that automatically make it an integration test? I don't think so.
Using single-responsibility design, it's black and white: more than one responsibility and it's an integration test.
By the duck test (looks like a duck, quacks like a duck, waddles like a duck: it's a duck), it's just a unit test with more than one newed-up object in it.
When you get into MVC and testing it, controller tests are always integration tests, because the controller contains both a model unit and a view unit. Testing the logic in that model is what I would call a unit test.
In my opinion the answer is "Why does it matter?"
Is it because unit tests are something you do and integration tests are something you don't? Or vice versa? Of course not, you should try to do both.
Is it because unit tests need to be Fast, Isolated, Repeatable, Self-Validating and Timely and integration tests should not? Of course not, all tests should be these.
Is it because you use mocks in unit tests but don't use them in integration tests? Of course not. That would imply that if I have a useful integration test, I am not allowed to add a mock for some part, for fear I would have to rename my test a "unit test" or hand it over to another programmer to work on.
Is it because unit tests test one unit and integration tests test a number of units? Of course not. Of what practical importance is that? The theoretical discussion on the scope of tests breaks down in practice anyway because the term "unit" is entirely context dependent. At the class level, a unit might be a method. At an assembly level, a unit might be a class, and at the service level, a unit might be a component.
And even classes use other classes, so which is the unit?
It is of no importance.
Testing is important, F.I.R.S.T is important, splitting hairs about definitions is a waste of time which only confuses newcomers to testing.
The nature of your tests
A unit test of module X is a test that expects (and checks for) problems only in module X.
An integration test of many modules is a test that expects problems that arise from the cooperation between the modules so that these problems would be difficult to find using unit tests alone.
Think of the nature of your tests in the following terms:
Risk reduction: That's what tests are for. Only a combination of unit tests and integration tests can give you full risk reduction, because on the one hand unit tests can inherently not test the proper interaction between modules and on the other hand integration tests can exercise the functionality of a non-trivial module only to a small degree.
Test writing effort: Integration tests can save effort because you may then not need to write stubs/fakes/mocks. But unit tests can save effort, too, when implementing (and maintaining!) those stubs/fakes/mocks happens to be easier than configuring the test setup without them.
Test execution delay: Integration tests involving heavyweight operations (such as access to external systems like DBs or remote servers) tend to be slow(er). This means unit tests can be executed far more frequently, which reduces debugging effort if anything fails, because you have a better idea what you have changed in the meantime. This becomes particularly important if you use test-driven development (TDD).
Debugging effort: If an integration test fails, but none of the unit tests does, this can be very inconvenient, because there is so much code involved that may contain the problem. This is not a big problem if you have previously changed only a few lines -- but as integration tests run slowly, you perhaps did not run them in such short intervals...
Remember that an integration test may still stub/fake/mock away some of its dependencies.
This provides plenty of middle ground between unit tests and system tests (the most comprehensive integration tests, testing all of the system).
Pragmatic approach to using both
So a pragmatic approach would be: Flexibly rely on integration tests as much as you sensibly can and use unit tests where this would be too risky or inconvenient.
This manner of thinking may be more useful than some dogmatic discrimination of unit tests and integration tests.
Unit Testing is a method of testing that verifies the individual units of source code are working properly.
Integration Testing is the phase of software testing in which individual software modules are combined and tested as a group.
Wikipedia defines a unit as the smallest testable part of an application, which in Java/C# is a method. But in your example of the Word and Sentence classes, I would probably just write the tests for Sentence, since I would likely find it overkill to use a mock Word class in order to test the Sentence class. So Sentence would be my unit, and Word is an implementation detail of that unit.
I think I would still call a couple of interacting classes a unit test provided that the unit tests for class1 are testing class1's features, and the unit tests for class2 are testing its features, and also that they are not hitting the database.
I call a test an integration test when it runs through most of my stack and even hits the database.
I really like this question, because TDD discussion sometimes feels a bit too purist to me, and it's good for me to see some concrete examples.
I do the same - I call them all unit tests, but at some point I have a "unit test" that covers so much that I often rename it to "..IntegrationTest" - just a name change, nothing else.
I think there is a continuum from "atomic tests" (testing one tiny class, or a method) to unit tests (class level) to integration tests, and then functional tests (which normally cover a lot more from the top down); there doesn't seem to be a clean cut-off.
If your test sets up data, and perhaps loads a database/file etc., then it's more of an integration test (integration tests, I find, use fewer mocks and more real classes, but that doesn't mean you can't mock out some of the system).
Integration tests: Database persistence is tested.
Unit tests: Database access is mocked. Code methods are tested.
Unit testing is testing against a unit of work or a block of code if you like. Usually performed by a single developer.
Integration testing refers to the testing that is performed, preferably on an integration server, when a developer commits their code to a source control repository. Integration testing might be performed by utilities such as CruiseControl.
So you do your unit testing to validate that the unit of work you have built is working and then the integration test validates that whatever you have added to the repository didn't break something else.
Simple Explanation with Analogies
This answer will focus purely on examples.
Integration Tests
Integration tests check if everything is working together.
Unit Tests
They tell you whether one specific thing is working.
Examples
Consider a car:
Integration test for a car: e.g. does the car drive to Pondicherry and back? If so, the car as a whole is working. If it fails, you won't really know where the fault is: the radiator, transmission, engine, or carburettor?
Unit test for a car: Is the engine working? This tests just the engine; nothing else. If this test fails, then you can be confident that there is a bug in the engine. This ties in closely with the concept of "fakes". You would normally need keys to start the engine, but you don't want the hassle of dealing with an actual ignition (with a lock); instead, you would hotwire the car to start it. In other words, you would use a "fake" key.
Similarly, in unit testing, you would use "fakes" in order to make the engine work a particular way. And then you could simply test: "is it running".
I call unit tests those tests that white-box test a class. Any dependencies that class requires are replaced with fake ones (mocks).
Integration tests are those tests where multiple classes and their interactions are tested at the same time. Only some dependencies in these cases are faked/mocked.
I wouldn't call controller tests integration tests unless one of their dependencies is a real one (i.e. not faked), e.g. IFormsAuthentication.
Separating the two types of tests is useful for testing the system at different levels. Also, integration tests tend to be long-running, while unit tests are supposed to be quick. The execution speed distinction means they're executed differently. In our dev processes, unit tests are run at check-in (which is fine because they're super quick), and integration tests are run once or twice per day. I try to run integration tests as often as possible, but usually hitting the database / writing to files / making RPCs / etc. slows things down.
That raises another important point: unit tests should avoid touching I/O (e.g. disk, network, DB). Otherwise they slow down a lot. It takes a bit of effort to design these I/O dependencies out. I can't claim I've been faithful to the "unit tests must be fast" rule, but if you are, the benefits on a much larger system become apparent very quickly.
A little bit academic this question, isn't it? ;-)
My point of view:
For me, an integration test is a test of the whole, not of whether two parts out of ten work together.
Our integration test shows whether the master build (containing 40 projects) will succeed.
For the projects we have tons of unit tests.
The most important thing concerning unit tests, for me, is that one unit test must not depend on another unit test. So for me, both tests you describe above are unit tests if they are independent. For integration tests this need not matter.
Have these tests essentially become integration tests because they now test the integration of these 2 classes? Or is it just a unit test that spans 2 classes?
I think Yes and Yes. Your unit test that spans 2 classes became an integration test.
You could avoid this by testing the Sentence class with a mock implementation (a MockWord class), which is important when those parts of the system are large enough to be implemented by different developers. In that case, Word is unit tested alone, Sentence is unit tested with the help of MockWord, and then Sentence is integration-tested with Word.
An example of the real difference could be the following:
1) An array of 1,000,000 elements is easily unit tested and works fine.
2) BubbleSort is easily unit tested on a mock array of 10 elements and also works fine.
3) Integration testing shows that something is not so fine.
If these parts are developed by a single person, the problem will most likely be found while unit testing BubbleSort, simply because the developer already has the real array and does not need a mock implementation.
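A rough sketch of that scenario (hypothetical test names; JUnit 4 assumed): the unit test on the small "mock" array passes, while an integration-style test against the real million-element array blows past any reasonable time budget:

import static org.junit.Assert.assertArrayEquals;

import java.util.Random;

import org.junit.Test;

public class BubbleSortTest {

    // The unit under test: correct, but O(n^2).
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    @Test
    public void unitTest_smallMockArraySortsFine() {
        int[] small = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};  // the 10-element "mock" array
        bubbleSort(small);
        assertArrayEquals(new int[] {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, small);
    }

    @Test(timeout = 5000)  // fails: 1,000,000 elements take far longer than 5 seconds
    public void integrationTest_realMillionElementArrayIsTooSlow() {
        int[] real = new Random(42).ints(1_000_000).toArray();
        bubbleSort(real);  // correct eventually, but not within any reasonable time budget
    }
}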
In addition, it's important to remember that both unit tests and integration tests can be automated and written using, for example, JUnit.
In JUnit integration tests, one can use the org.junit.Assume class to test the availability of environment elements (e.g., database connection) or other conditions.
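For instance, in JUnit 4 something along these lines is possible (the connection details below are made up for illustration):

import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.junit.Assume;
import org.junit.Before;
import org.junit.Test;

public class CustomerRepositoryIT {

    // Hypothetical connection details, purely for illustration.
    private static final String JDBC_URL = "jdbc:postgresql://localhost:5432/testdb";

    private Connection connection;

    @Before
    public void connectOrSkip() {
        // If the database is not reachable, the test is reported as skipped
        // (a failed assumption) rather than as a failure.
        try {
            connection = DriverManager.getConnection(JDBC_URL, "test", "test");
        } catch (SQLException e) {
            Assume.assumeNoException(e);
        }
    }

    @Test
    public void canTalkToTheDatabase() throws SQLException {
        assertTrue(connection.isValid(2));
    }
}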
I get asked this a lot in interviews. Until now I'd ramble on pretentiously about my expertise and pontificate about component and acceptance testing.
For years I'd understood only integration and unit tests. I could, but didn't always bother to, write unit tests as a solo developer honing my skills.
Unit tests
Unit tests are easy to implement and execute, requiring, ideally, no dependencies. That is a crucial difference, and it is what mocks are for. It is often easier not to mock everything, particularly where you gain coverage of other functions you wrote. Easier, maybe, but that isn't the idea of unit testing.
I'll reiterate, unit tests are meant to be easy to run and small. Their failure provides immediate insight into where a bug has been introduced.
Think of the hierarchy of tests as a pyramid, from cheap and plentiful unit tests at the bottom to slow, expensive, and few acceptance tests at the top. Several more layers can be conceptualised, but are omitted here for clarity.
Integration tests
With integration tests you would consider bringing in serious external dependencies, such as VMs, virtual networks and appliances. Possibly you could use actual modems, routers, and firewalls where the expense was justified.
These wouldn't be run locally but on a build server. A mixture of local Jenkins and cloud based CI providers fulfil this need.
Other test terminology
That is my understanding that has served me for several years in industry. We could talk about component tests, and get a definition, but if the definition isn't in common circulation then it loses value.
Acceptance tests were what we would call business unit or customer requirements. These would lead the direction of everything and sit at the top of the pyramid (picture a dollar sign).
E2E, or end-to-end testing, was used synonymously with integration testing where I worked, but I have noticed that online it is placed above it. I guess it has more in common with acceptance tests than with integration tests, which tend to be more detailed and of less interest to stakeholders (though of immense interest internally within the department).
If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass.
You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!