Integration tests, unit tests and code coverage

I was reviewing some code for a colleague and came across a test in the unit test class that looked like this:
// setup
Foo f = ...
FooToBarConverter ftb = ...
Bar bar = ftb.Convert(f); // It is easier to create a Bar by converting it from a Foo than making one 'from scratch'
// test
systemUnderTest.DoSomething(bar);
// assert
Assert.IsTrue(...)
Clearly this is an integration test, as it is testing the FooToBarConverter as well as the system under test, and it is the only test that covers the DoSomething() method. I suggested moving this test to an integration test solution; however, this reduces the code coverage of the unit tests. We are aiming for 100% unit test code coverage (and yes - I know that 100% coverage is a means to an end, not the end itself, and that 100% covered code is not necessarily 100% correct code).
Is there a reason for creating unit tests to bring the coverage back up if we move the integration test out?
Or are we aiming for the wrong thing with 100% unit test coverage? Should we be aiming for 100% coverage with the combination of all our tests (or even aiming for 100% at all)?
Thank you.
EDIT/UPDATE:
This is not a question about how to unit test the system under test properly (I know the reasons that this is not a unit test, and I know how to properly convert it into a unit test), nor is it a question about coverage of FooToBarConverter. I want opinions on code coverage of the system under test: are integration tests on the system under test sufficient, or should there also be unit tests?

I think the answer here is "it depends".
If you have full unit test coverage on the FooToBarConverter class then you are probably OK with just the integration test of systemUnderTest, because you can say with confidence that the real FooToBarConverter behaves as expected in this context and therefore does not incorrectly influence the result of the test.
On the other hand, it's unclear specifically what this test is checking for - are you examining the behaviour of systemUnderTest when given a valid FooToBarConverter, or some other expected side effect within systemUnderTest to which FooToBarConverter is a purely coincidental actor? (i.e. are you sure this isn't an indirect test of bar?)
Now personally I would recommend that you also do a proper, "pure" unit test (using a mock or stub of FooToBarConverter) for systemUnderTest because
it will make regressions easier to manage; suppose that in the future some change to FooToBarConverter makes its unit tests fail - they will quite possibly also therefore make this integration test fail. That could be confusing for someone looking at failed tests and not knowing that the integration test failure can be ignored and that only the FooToBarConverter tests need to be fixed. It's a small thing, I know, but it might save 5 important minutes some day :)
How do you test the negative cases (the behaviour of systemUnderTest when given a broken/invalid/null FooToBarConverter)? Since you'll probably have to write unit tests with stubs/mocks for these kinds of cases anyway, you might as well have a unit test for the good case in the same project/test class as well; it's much clearer - otherwise you have to aggregate code coverage across both unit test and integration test projects to verify that systemUnderTest is fully covered...
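For illustration only, here is a minimal sketch of what such a "pure" unit test might look like, assuming NUnit; the SystemUnderTest class name and the SomethingHappened property are hypothetical stand-ins, and only Foo, Bar and FooToBarConverter come from the original example:
using System;
using NUnit.Framework;

[TestFixture]
public class SystemUnderTestTests
{
    [Test]
    public void DoSomething_WithValidBar_SetsExpectedState()
    {
        // arrange: build the Bar directly (or via a stub), so FooToBarConverter
        // is never involved and only the system under test is exercised
        var bar = new Bar(); // set only the members DoSomething actually reads
        var systemUnderTest = new SystemUnderTest();

        // act
        systemUnderTest.DoSomething(bar);

        // assert (replace with the real expectation)
        Assert.IsTrue(systemUnderTest.SomethingHappened);
    }

    [Test]
    public void DoSomething_WithNullBar_Throws()
    {
        // one of the negative cases mentioned above; here we assume a null Bar is rejected
        var systemUnderTest = new SystemUnderTest();
        Assert.Throws<ArgumentNullException>(() => systemUnderTest.DoSomething(null));
    }
}
Whether the null case throws, returns early, or logs depends on the real DoSomething contract; the point is just that neither test needs the converter.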
Also, don't worry about 100% code coverage; it's nice to have but in practice it's rare to see it. I don't mean this as a sop to good design practices either; the simple reality is that no design is 100% perfect and therefore it's to be expected that there are times when you just don't have the time/resource/will to refactor your classes to allow every single dependency to be injected, or to be able to use interfaces for every inter-seam interaction, etc.
Hope that helps.

Related

How is unit testing related to code coverage?

I know that code coverage is a metric, but it is often mentioned in articles about unit testing. But when designing the unit tests, I'm trying to write tests for my business logic and do not care much about coverage. What is the relation then?
The thinking goes as follows:
If the code executed when you run your unit tests covers all the code in the class you test (system under test, SUT), you obviously tested all relevant code.
So, a high code coverage of the SUT is a good thing.
But it also can be misleading. Having 100% code coverage doesn't mean that you actually tested all your business logic. So, concentrating on testing all your business logic is actually the better approach.
If you have tested all your business logic, you will have 100% code coverage - or there is some code in your business logic that is not needed there.
Still, you can use the code coverage to check if you actually have tested all your business logic.
So, to sum up:
If you don't have 100% code coverage of your SUT, it strongly suggests that you haven't tested your complete business logic
BUT: 100% code coverage doesn't ensure that you tested all of your logic
Literally a "unit test" tests a single unit, or in other words a single component. Thus, unit tests do not necessarily cover business requirements, but rather ensure that every piece of a component does what it is meant for. Code coverage measures the percentage of code that has been tested and thus validated. Every piece of code that has not been tested could probably contain defects so a high code coverage result is desired.
Of course unit tests, or rather the frameworks used to implement unit tests, can also be used to test more than one component at once. These tests are more like integration tests, though on a very low level.
The relation between code coverage and integration (or business-logic testing) is that if you have a code coverage of 100% you know that every component does what it is supposed to do. But if you want to ensure that the application does what it is supposed to do, a high code coverage is not nearly enough: for this, additional integration tests (which test more than one component at a time) are needed.
Unit testing with code coverage turned on can be a very powerful tool.
For example, when testing a method you can see whether you have hit all the logic paths. If not, more testing is required. If it is impossible to hit a section of code, you know you can remove it without any side effects.
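To make "hitting all the logic paths" concrete, here is a minimal sketch (the Discounts class and its rule are invented for this example, and NUnit is assumed): the method has one branch, so at least two tests are needed before coverage reports both paths as exercised.
using NUnit.Framework;

public static class Discounts
{
    public static decimal Apply(decimal price, bool isMember)
    {
        if (isMember)
            return price * 0.9m; // path 1: member discount
        return price;            // path 2: no discount
    }
}

[TestFixture]
public class DiscountsTests
{
    // Each test exercises a different branch; together they give full branch coverage of Apply().
    [Test]
    public void Apply_ForMembers_TakesTenPercentOff()
    {
        Assert.AreEqual(90m, Discounts.Apply(100m, isMember: true));
    }

    [Test]
    public void Apply_ForNonMembers_LeavesPriceUnchanged()
    {
        Assert.AreEqual(100m, Discounts.Apply(100m, isMember: false));
    }
}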

What is unit testing, and does it require code being written?

I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify behavior in a small section of code. For example, you may write a unit test to make sure it returns the right value and makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you with your assertions.
Unit testing to me is supposed to be fast and quick. To me, you want these so you can automate it, and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they
actually check the code they wrote locally, and that all of the points
are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework you might use the same testing framework (e.g. junit for java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
Unit testing to me is supposed to be fast and quick. To me, you want
these so you can automate it, and have a huge suite of tests for your
application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit-tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, less problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it worked and what it exactly did, things will become a lot more problematic. And before you realize it, you'll spend your nights browsing through log files and debugging what seemed like a small feature just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing it (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both without and with unit tests knows that. Please, please, make them change their mind. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was done without testing in mind than to build the test and production code side by side from the very start. Testing not only helps you make sure your code works fine, it improves your code quality by forcing you to make good decisions (i.e. coding to interfaces, loose coupling, inversion of control, etc.)
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to carve on the same code. And that's wasteful.
To add my two cents, yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copying and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.
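For example, a test class whose names alone read like a specification (the Account domain here is invented, not from the original post):
using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    // Reading just the test names should tell you what Account does.
    [Test] public void Withdraw_WhenBalanceIsInsufficient_Throws() { /* ... */ }
    [Test] public void Withdraw_WithSufficientBalance_ReducesBalanceByTheAmount() { /* ... */ }
    [Test] public void Transfer_BetweenTwoAccounts_LeavesTheTotalBalanceUnchanged() { /* ... */ }
}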

Best practices for testing other tests?

Starting to write unit tests (and other kinds), I realize that some tests (acceptance tests and the like) can quickly grow into quite complex software, and I felt that maybe some unit tests (or at least shorter tests than the original one) to verify the longer tests might be in order.
"Testing tests" might sound silly, but I wanted to know if there are people practicing this and if there is any known best practice for "Testing tests"?
The tests test the production code, and the production code should test the tests. ;-) That is to say: what you mainly want to test about a test is that it properly tests the code it's intended to test. The principal way to do this is to make the code wrong and see that the tests catch it.
There are tools for doing this automatically and measuring test coverage by seeing what mutations you can do to the code that are not caught by a test. See Mutation Testing.
If your tests are complex, you should be refactoring them and extracting some of the complexity out into test helpers. With enough of these, you might end up building something resembling a test framework, and that would deserve tests. Indeed, JUnit, RSpec, Cucumber, FitNesse, etc. are all test frameworks and have suites of tests to test the framework.
You should try to keep the tests themselves simple enough that you trust they work. But verifying by mutation testing might be worthwhile, especially as it also gives you a measure of your coverage.
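As a tiny, hand-rolled illustration of the mutation idea (all code here is invented for the sketch): deliberately break the production code and check that at least one test goes red; if no test fails, the suite has a gap.
using NUnit.Framework;

public static class Ages
{
    // original production code
    public static bool IsAdult(int age) => age >= 18;

    // the kind of "mutant" a mutation-testing tool would generate by flipping >= to >
    // (a real tool rewrites IsAdult itself rather than adding a second method)
    public static bool IsAdultMutant(int age) => age > 18;
}

[TestFixture]
public class AgesTests
{
    // This boundary test "kills" the mutant: it passes against the original
    // but would fail if IsAdult were replaced by the mutated version.
    [Test]
    public void IsAdult_AtExactlyEighteen_ReturnsTrue()
    {
        Assert.IsTrue(Ages.IsAdult(18));
    }
}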
Big tests have little tests upon their backs to verify 'em.
And Little tests have lesser tests, and so ad infinitum.
Loosely based on Jonathan Swift's little poem
Don't test tests. Simply assume they're correct and move on.
If the lower tests fail, of course the higher tests will fail. That's understandable. (I would advise making a note that "This test relies on tests X, Y and Z passing." if it isn't intuitive from a quick overview of the higher test.) But you don't need a specific test to make sure another test works before you move on - the test suite should be running ALL tests.
I think if you have to test your tests you're doing something wrong. In my opinion tests should be really easy and simple. However, what you can do is measure how good your tests are. Maybe you want to know how much code coverage your tests have..
but that's not testing the tests as you thought, is it?
In any case, it's probably a waste of time.. :)
You've touched on a few things here. Let me try to cover them all ...
If you've written an in-house test framework, then it should have some tests around it. If you're not using the off-the-shelf frameworks and tools, then you need to know that the framework itself is working the way it is expected to. Tests will provide that.
Tests are a double-check of the code. It is better to test the code again from a different angle than to write a test of the test. I would spend time expanding my test data set or building a fuzz tester or some other such exercise to expand test coverage than spend time writing code to test the tests themselves.
You should formalize the testing process. Having a test plan (even if it is just a list of scenarios with repro steps in a wiki) is a huge step up from where most people are in their testing efforts. You should also have some form of code review for tests before/when they are checked into the code repository. This will help catch little errors like, "Oh ... you didn't realize that when x is true that it always returns five?"
When a test fails, one of the first things that a tester should check is if the test is correct. (Often because that's the first thing the developer is going to insist, that the test is wrong and their code is right.) Swallow your pride and verify the test is 100% correct, then try to figure out if there is a bug in the shipping code. You are using bug tracking software, right? When a test is broken, a bug is filed, isn't it? Sharing these test bugs helps everyone learn how to be better testers.
And finally, when all the tests are passing is when the tester should be the most vigilant! Double-check everything again and use this time to expand the test coverage in new ways, like fuzz testing, model-based testing, etc.

Why is using integration tests instead of unit tests a bad idea?

Let me start from definition:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically I want to collect arguments for the development teams which write ONLY integration tests and consider them to be unit tests.
Any test which involves components from different layers is considered an integration test, in contrast to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't the problem; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality". The graph from that deck was reproduced in the original answer in case it ever becomes hard to find on the net.
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), therefore you need more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I do unit tests, and ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it. I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behaviour at the individual class level. Unit tests tell you that a single class is doing what it was designed to do. Integration tests tell you that a number of classes working together do what you expect them to do for that particular collaboration instance. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component such as a method or at most a class is supposed to test that component in every legal scenario (of course, one abstracts equivalence classes but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons you listed. Without unit tests, it is more likely that a change which essentially operates a unit in a new scenario would not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
I am not sure that most developers refer to unit tests as integration tests. My impression is that most developers understand the differences, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally based on one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extendable.
When you check-in, your build server should only run unit tests and
they should be done in a few seconds, not minutes or hours.
Integration tests should be run overnight or manually as needed.
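To make the "no external resources in a unit test" point above concrete, here is a minimal sketch assuming Moq and NUnit (the repository interface and classes are invented for the example); the database is replaced by a mock so only the class under test runs:
using Moq;
using NUnit.Framework;

public interface ICustomerRepository // hypothetical abstraction over the database
{
    string GetName(int customerId);
}

public class GreetingService // hypothetical class under test
{
    private readonly ICustomerRepository repository;
    public GreetingService(ICustomerRepository repository) { this.repository = repository; }
    public string Greet(int customerId) => "Hello, " + repository.GetName(customerId) + "!";
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void Greet_UsesTheNameFromTheRepository()
    {
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetName(42)).Returns("Ada"); // no real database involved

        var service = new GreetingService(repository.Object);

        Assert.AreEqual("Hello, Ada!", service.Greet(42));
        repository.Verify(r => r.GetName(42), Times.Once()); // interaction check
    }
}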
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests, though, having dependencies and taking a long time to run. They should function separately and as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they are testing different things, and of course passing and failing tests will give you different information. They make up sort of a yin and yang of testing, as they approach it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These will cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test will tell you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your function are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test will give you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit Testing: Test small things in isolation with low level details including but not limited to 'method conditions', checks, loops, defaulting, calculations etc.
Integration testing: Test a wider scope which involves a number of components that can impact the behaviour of other things when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests should be to prove that systems/components work fine when integrated together.
(I think) What the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes and only testing their own feature code, because they think someone else tested that so it should be fine. This helps code coverage but can harm the application in general.
In theory the tight isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking the API and persistence layers would be an example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking; if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them; they aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test. (TDD)
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Providing a lot of data for code quality checking (e.g. coverage etc.)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you might know where to look first when those fail.
Pros:
Tests close to a real-world e2e scenario
Finds Issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment (persistency and api) pollution issues (needs cleanup steps)
Mostly not feasible to be used on PR's (Pull Requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.

What is the difference between integration and unit tests?

I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it is time to write unit tests... I will write them to cover as many sets of classes as possible.
For example, if I have a Word class, I will write some unit tests for the Word class. Then, I begin writing my Sentence class, and when it needs to interact with the Word class, I will often write my unit tests such that they test both Sentence and Word... at least in the places where they interact.
Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes?
In general, because of this uncertain line, I will rarely actually write integration tests... or is my using the finished product to see if all the pieces work properly the actual integration tests, even though they are manual and rarely repeated beyond the scope of each individual feature?
Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests?
The key difference, to me, is that integration tests reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected.
By contrast, a unit test testing a single method relies on the (often wrong) assumption that the rest of the software is working correctly, because it explicitly mocks every dependency.
Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working.
Say you have a method somewhere like this:
public SomeResults DoSomething(someInput) {
var someResults = [Do your job with someInput];
Log.TrackTheFactYouDidYourJob();
return someResults;
}
DoSomething is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to verify and communicate whether the feature is working or not.
Feature: To be able to do something
In order to do something
As someone
I want the system to do this thing
Scenario: A sample one
Given this situation
When I do something
Then what I get is what I was expecting for
No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call Business Value.
If you want to write a unit test for DoSomething you should pretend (using some mocks) that the rest of the classes and methods are working (that is: that, all dependencies the method is using are correctly working) and assert your method is working.
In practice, you do something like:
public SomeResults DoSomething(someInput) {
var someResults = [Do your job with someInput];
FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // Using a mock Log
return someResults;
}
You can do this with dependency injection, a factory method, a mocking framework, or just by extending the class under test.
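For instance, with constructor injection it might look like the sketch below (ILog, FakeLog, Worker and the trivial types are invented; the point is only that the test can hand DoSomething a Log that can never break):
using NUnit.Framework;

public class SomeInput { }
public class SomeResults { }

public interface ILog
{
    void TrackTheFactYouDidYourJob();
}

// the "always working" Log used by the unit test
public class FakeLog : ILog
{
    public int Calls;
    public void TrackTheFactYouDidYourJob() { Calls++; }
}

// hypothetical class owning DoSomething; the real Log is injected in production,
// the fake one in the unit test
public class Worker
{
    private readonly ILog log;
    public Worker(ILog log) { this.log = log; }

    public SomeResults DoSomething(SomeInput someInput)
    {
        var someResults = new SomeResults(); // stands in for [Do your job with someInput]
        log.TrackTheFactYouDidYourJob();
        return someResults;
    }
}

[TestFixture]
public class WorkerTests
{
    [Test]
    public void DoSomething_TracksItsWorkInTheLog()
    {
        var fakeLog = new FakeLog();
        var worker = new Worker(fakeLog);

        worker.DoSomething(new SomeInput());

        Assert.AreEqual(1, fakeLog.Calls); // the unit test never touches the real Log
    }
}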
Suppose there's a bug in Log.DoSomething().
Fortunately, the Gherkin spec will find it and your end-to-end tests will fail.
The feature won't work, because Log is broken, not because [Do your job with someInput] is not doing its job. And, by the way, [Do your job with someInput] is the sole responsibility of that method.
Also, suppose Log is used in 100 other features, in 100 other methods of 100 other classes.
Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: they are telling the truth.
It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom to me, not the root cause.
Yet, DoSomething's unit test is green, because it's using a fake Log, built to never break. And, yes: it's clearly lying. It's communicating a broken feature is working. How can it be useful?
(If DoSomething()'s unit test fails, be sure: [Do your job with someInput] has some bugs.)
Suppose you have a system with a broken class: a single bug will break several features, and several integration tests will fail. On the other hand, the same bug will break just one unit test.
Now, compare the two scenarios:
Integration tests: all your features using the broken Log are red.
Unit tests: all your unit tests are green; only the unit test for Log is red.
Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and seek them. Unit testing means mocking. If you aren't mocking, you aren't unit testing.
The difference
Integration tests tell what's not working. But they are of no use in guessing where the problem could be.
Unit tests are the sole tests that tell you where exactly the bug is. To draw this information, they must run the method in a mocked environment, where all other dependencies are supposed to correctly work.
That's why I think that your sentence "Or is it just a unit test that spans 2 classes" is somehow displaced. A unit test should never span 2 classes.
This reply is basically a summary of what I wrote here: Unit tests lie, that's why I love them.
When I write unit tests I limit the scope of the code being tested to the class I am currently writing by mocking dependencies. If I am writing a Sentence class, and Sentence has a dependency on Word, I will use a mock Word. By mocking Word I can focus only on its interface and test the various behaviors of my Sentence class as it interacts with Word's interface. This way I am only testing the behavior and implementation of Sentence and not at the same time testing the implementation of Word.
Once I've written the unit tests to ensure Sentence behaves correctly when it interacts with Word based on Word's interface, then I write the integration test to make sure that my assumptions about the interactions were correct. For this I supply the actual objects and write a test that exercises a feature that will end up using both Sentence and Word.
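Roughly like this, assuming an IWord interface, Moq and NUnit (the members shown are invented for the sketch; the real Sentence and Word will look different):
using System.Linq;
using Moq;
using NUnit.Framework;

public interface IWord
{
    string Text { get; }
}

public class Word : IWord // the real implementation, unit tested elsewhere
{
    public Word(string text) { Text = text; }
    public string Text { get; }
}

public class Sentence // class under test
{
    private readonly IWord[] words;
    public Sentence(params IWord[] words) { this.words = words; }
    public string Render() => string.Join(" ", words.Select(w => w.Text)) + ".";
}

[TestFixture]
public class SentenceTests
{
    [Test] // unit test: Word is mocked, only Sentence's behaviour is exercised
    public void Render_JoinsWordsWithSpaces_AndAddsAFullStop()
    {
        var hello = new Mock<IWord>(); hello.Setup(w => w.Text).Returns("Hello");
        var world = new Mock<IWord>(); world.Setup(w => w.Text).Returns("world");

        Assert.AreEqual("Hello world.", new Sentence(hello.Object, world.Object).Render());
    }

    [Test] // integration test: real Word objects confirm the assumed interaction
    public void Render_WorksWithRealWords()
    {
        Assert.AreEqual("Hello world.", new Sentence(new Word("Hello"), new Word("world")).Render());
    }
}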
My 10 bits :D
I was always told that unit testing is the testing of an individual component - which should be exercised to its fullest. Now, this tends to have many levels, since most components are made of smaller parts. For me, a unit is a functional part of the system. So it has to provide something of value (i.e. not a method for string parsing, but an HtmlSanitizer perhaps).
Integration tests are the next step up: taking one or more components and making sure they work together as they should. You are then above the intricacies of worrying about how the components work individually; when you enter HTML into your HtmlEditControl, it somehow magically knows whether it's valid or not.
It's a real movable line though... I'd rather focus more on getting the damn code to work, full stop ^_^
In a unit test you test every part in isolation; in an integration test you test many modules of your system together. And this is what happens when you only use unit tests: generally both windows are working, unfortunately not together. (The original answer illustrates each of these points with an image.)
Unit tests use mocks
The thing you're talking about are integration tests that actually test the whole integration of your system. But when you do unit testing you should actually test each unit separately. Everything else should be mocked. So in your case of Sentence class, if it uses Word class, then your Word class should be mocked. This way, you'll only test your Sentence class functionality.
I think when you start thinking about integration tests, you are speaking more of crossing physical layers rather than logical layers.
For example, if your test concerns itself with generating content, it's a unit test; if your test concerns itself with just writing to disk, it's still a unit test; but once you test for both I/O AND the content of the file, then you have yourself an integration test. When you test the output of a function within a service, it's a unit test, but once you make a service call and check whether the function result is the same, then that's an integration test.
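A rough sketch of that line (all names invented; NUnit assumed): the first test is a unit test because it only checks the generated content, while the second becomes an integration test because it also exercises the file system.
using System.IO;
using NUnit.Framework;

public class ReportGenerator // hypothetical content generator
{
    public string Generate(string month) => "Report for " + month;
}

[TestFixture]
public class ReportTests
{
    [Test] // unit test: content only, no I/O
    public void Generate_StartsWithAHeaderForTheMonth()
    {
        StringAssert.StartsWith("Report for 2024-01", new ReportGenerator().Generate("2024-01"));
    }

    [Test] // integration test: content AND disk I/O together
    public void Generate_CanBeWrittenToDiskAndReadBack()
    {
        var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        try
        {
            File.WriteAllText(path, new ReportGenerator().Generate("2024-01"));
            StringAssert.Contains("2024-01", File.ReadAllText(path));
        }
        finally
        {
            File.Delete(path);
        }
    }
}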
Technically you cannot unit test just-one-class anyway. What if your class is composed with several other classes? Does that automatically make it an integration test? I don't think so.
Using single-responsibility design, it's black and white: more than one responsibility, and it's an integration test.
By the duck test (looks, quacks, waddles: it's a duck), it's just a unit test with more than one newed-up object in it.
When you get into MVC and testing it, controller tests are always integration tests, because the controller contains both a model unit and a view unit. Testing logic in that model is what I would call a unit test.
In my opinion the answer is "Why does it matter?"
Is it because unit tests are something you do and integration tests are something you don't? Or vice versa? Of course not, you should try to do both.
Is it because unit tests need to be Fast, Isolated, Repeatable, Self-Validating and Timely and integration tests should not? Of course not, all tests should be these.
Is it because you use mocks in unit tests but you don't use them in integration tests? Of course not. This would imply that if I have a useful integration test, I am not allowed to add a mock for some part, for fear I would have to rename my test to "unit test" or hand it over to another programmer to work on.
Is it because unit tests test one unit and integration tests test a number of units? Of course not. Of what practical importance is that? The theoretical discussion on the scope of tests breaks down in practice anyway because the term "unit" is entirely context dependent. At the class level, a unit might be a method. At an assembly level, a unit might be a class, and at the service level, a unit might be a component.
And even classes use other classes, so which is the unit?
It is of no importance.
Testing is important, F.I.R.S.T is important, splitting hairs about definitions is a waste of time which only confuses newcomers to testing.
The nature of your tests
A unit test of module X is a test that expects (and checks for) problems only in module X.
An integration test of many modules is a test that expects problems that arise from the cooperation between the modules so that these problems would be difficult to find using unit tests alone.
Think of the nature of your tests in the following terms:
Risk reduction: That's what tests are for. Only a combination of unit tests and integration tests can give you full risk reduction, because on the one hand unit tests can inherently not test the proper interaction between modules and on the other hand integration tests can exercise the functionality of a non-trivial module only to a small degree.
Test writing effort: Integration tests can save effort because you may then not need to write stubs/fakes/mocks. But unit tests can save effort, too, when implementing (and maintaining!) those stubs/fakes/mocks happens to be easier than configuring the test setup without them.
Test execution delay: Integration tests involving heavyweight operations (such as access to external systems like DBs or remote servers) tend to be slow(er). This means unit tests can be executed far more frequently, which reduces debugging effort if anything fails, because you have a better idea what you have changed in the meantime. This becomes particularly important if you use test-driven development (TDD).
Debugging effort: If an integration test fails, but none of the unit tests does, this can be very inconvenient, because there is so much code involved that may contain the problem. This is not a big problem if you have previously changed only a few lines -- but as integration tests run slowly, you perhaps did not run them in such short intervals...
Remember that an integration test may still stub/fake/mock away some of its dependencies.
This provides plenty of middle ground between unit tests and system tests (the most comprehensive integration tests, testing all of the system).
Pragmatic approach to using both
So a pragmatic approach would be: Flexibly rely on integration tests as much as you sensibly can and use unit tests where this would be too risky or inconvenient.
This manner of thinking may be more useful than some dogmatic discrimination of unit tests and integration tests.
Unit Testing is a method of testing that verifies the individual units of source code are working properly.
Integration Testing is the phase of software testing in which individual software modules are combined and tested as a group.
Wikipedia defines a unit as the smallest testable part of an application, which in Java/C# is a method. But in your example of Word and Sentence class I would probably just write the tests for sentence since I would likely find it overkill to use a mock word class in order to test the sentence class. So sentence would be my unit and word is an implementation detail of that unit.
I think I would still call a couple of interacting classes a unit test provided that the unit tests for class1 are testing class1's features, and the unit tests for class2 are testing its features, and also that they are not hitting the database.
I call a test an integration test when it runs through most of my stack and even hits the database.
I really like this question, because TDD discussion sometimes feels a bit too purist to me, and it's good for me to see some concrete examples.
I do the same - I call them all unit tests, but at some point I have a "unit test" that covers so much I often rename it to "..IntegrationTest" - just a name change only, nothing else changes.
I think there is a continuum from "atomic tests" (testing one tiny class, or a method) to unit tests (class level) to integration tests - and then functional tests (which normally cover a lot more stuff from the top down) - there doesn't seem to be a clean cut-off.
If your test sets up data, and perhaps loads a database/file etc., then perhaps it's more of an integration test (integration tests I find use fewer mocks and more real classes, but that doesn't mean you can't mock out some of the system).
Integration tests: Database persistence is tested.
Unit tests: Database access is mocked. Code methods are tested.
Unit testing is testing against a unit of work or a block of code if you like. Usually performed by a single developer.
Integration testing refers to the test that is performed, preferably on an integration server, when a developer commits their code to a source control repository. Integration testing might be performed by utilities such as Cruise Control.
So you do your unit testing to validate that the unit of work you have built is working and then the integration test validates that whatever you have added to the repository didn't break something else.
Simple Explanation with Analogies
This answer will focus purely on examples.
Integration Tests
Integration tests check if everything is working together.
Unit Tests
They tell you whether one specific thing is working.
Examples
Consider a car:
Integration test for a car: e.g. does the car drive to Pondicherry and back? If so, the car as a whole is working. If it fails, you won't really know where: radiator, transmission, engine, or carburettor?
Unit test for a car: Is the engine working? This tests just the engine; nothing else. If this test fails, then you can be confident that there is a bug in the engine... This ties in closely with the concept of "fakes". You might need some keys in order to start the engine - except, you don't want to go to the hassle of actually using an ignition (with a lock)... instead, you would hotwire the car to start it... in other words, you would use a "fake" key.
Similarly, in unit testing, you would use "fakes" in order to make the engine work a particular way. And then you could simply test: "is it running".
I call unit tests those tests that white box test a class. Any dependencies that class requires is replaced with fake ones (mocks).
Integration tests are those tests where multiple classes and their interactions are tested at the same time. Only some dependencies in these cases are faked/mocked.
I wouldn't call a Controller's tests integration tests unless one of its dependencies is a real one (i.e. not faked) (e.g. IFormsAuthentication).
Separating the two types of tests is useful for testing the system at different levels. Also, integration tests tend to be long-running, and unit tests are supposed to be quick. The execution-speed distinction means they're executed differently. In our dev processes, unit tests are run at check-in (which is fine 'cos they're super quick), and integration tests are run once or twice per day. I try to run integration tests as often as possible, but usually hitting the database/writing to files/making RPCs/etc. slows things down.
That raises another important point: unit tests should avoid hitting I/O (e.g. disk, network, DB). Otherwise they slow down a lot. It takes a bit of effort to design these I/O dependencies out - I can't claim I've been faithful to the "unit tests must be fast" rule, but if you are, the benefits on a much larger system become apparent very quickly.
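One common way to keep the two kinds apart in the same code base is to tag the slow tests so the check-in build can filter them out; with NUnit that might look like the sketch below (the category name and test contents are arbitrary):
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test] // plain unit test: no I/O, runs on every check-in
    public void Add_ReturnsTheSumOfItsArguments()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}

[TestFixture]
[Category("Integration")] // the whole fixture is tagged as slow/integration
public class CustomerPersistenceTests
{
    [Test] // would hit a real database, so it only runs in the nightly build
    public void Customer_CanBeSavedAndReloaded()
    {
        // ... real database round trip here ...
        Assert.Pass();
    }
}
The check-in build can then be configured to exclude the "Integration" category and the nightly build to include it; the exact filter syntax depends on the runner you use.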
A little bit academic this question, isn't it? ;-)
My point of view:
For me, an integration test is a test of the whole thing, not of whether two parts out of ten work together.
Our integration test shows, if the master build (containing 40 projects) will succeed.
For the projects we have tons of unit tests.
The most important thing concerning unit tests, for me, is that one unit test must not depend on another unit test. So for me, both tests you describe above are unit tests if they are independent. For integration tests this need not be important.
Have these tests essentially become integration tests because they now test the integration of these 2 classes? Or is it just a unit test that spans 2 classes?
I think Yes and Yes. Your unit test that spans 2 classes became an integration test.
You could avoid this by testing the Sentence class with a mock implementation - a MockWord class - which is important when those parts of the system are large enough to be implemented by different developers. In that case, Word is unit tested alone, Sentence is unit tested with the help of MockWord, and then Sentence is integration-tested with Word.
An example of the real difference could be the following:
1) An array of 1,000,000 elements is easily unit tested and works fine.
2) BubbleSort is easily unit tested on a mock array of 10 elements and also works fine.
3) Integration testing shows that something is not so fine.
If these parts are developed by a single person, the problem will most likely be found while unit testing BubbleSort, just because the developer already has the real array and does not need a mock implementation.
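A rough sketch of that situation (code invented for illustration): the unit test below is green on ten elements, and only running the sort against the real 1,000,000-element data set would reveal that bubble sort is the wrong tool at that scale.
using System.Linq;
using NUnit.Framework;

public static class Sorting
{
    public static void BubbleSort(int[] data) // O(n^2): fine for 10 elements, hopeless for 1,000,000
    {
        for (int i = 0; i < data.Length - 1; i++)
            for (int j = 0; j < data.Length - 1 - i; j++)
                if (data[j] > data[j + 1])
                    (data[j], data[j + 1]) = (data[j + 1], data[j]);
    }
}

[TestFixture]
public class BubbleSortTests
{
    [Test] // unit test on a tiny "mock" array: green
    public void Sorts_TenElements()
    {
        var data = new[] { 5, 3, 9, 1, 4, 8, 2, 7, 6, 0 };
        Sorting.BubbleSort(data);
        Assert.That(data, Is.EqualTo(Enumerable.Range(0, 10).ToArray()));
    }

    // An integration-style test feeding in the real 1,000,000-element array would still
    // sort correctly eventually, but its running time is what reveals the problem.
}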
In addition, it's important to remember that both unit tests and integration tests can be automated and written using, for example, JUnit.
In JUnit integration tests, one can use the org.junit.Assume class to test the availability of environment elements (e.g., database connection) or other conditions.
I get asked this a lot in interviews. Until now I'd ramble on pretentiously about my expertise and pontificate about component and acceptance testing.
For years I'd understood only integration and unit tests. I could, but didn't always bother to, write unit tests as a solo developer honing my skills.
Unit tests
The crucial difference is that unit tests are easy to implement and execute, requiring, ideally, no dependencies. That is what mocks are for. It is often easier not to mock everything, particularly where you gain coverage of other functions you wrote. Easier, maybe, but that isn't the idea of unit testing.
I'll reiterate, unit tests are meant to be easy to run and small. Their failure provides immediate insight into where a bug has been introduced.
The hierarchy of tests runs from cheap and plentiful at the bottom to slow, expensive, and few at the top (shown in the original answer as a test pyramid); several more layers can be conceptualised, but are omitted for clarity.
Integration tests
With integration tests you would consider bringing in serious external dependencies, such as VMs, virtual networks and appliances. Possibly you could use actual modems, routers, and firewalls where the expense was justified.
These wouldn't be run locally but on a build server. A mixture of local Jenkins and cloud based CI providers fulfil this need.
Other test terminology
That is my understanding that has served me for several years in industry. We could talk about component tests, and get a definition, but if the definition isn't in common circulation then it loses value.
Acceptance tests were what we would call business unit or customer requirements. These would lead the direction of everything and sit at the top of the pyramid (picture a dollar sign).
E2E, or end-to-end, testing was used synonymously with integration testing, but I noticed online it is placed above. I guess it could have more relevance to acceptance tests than integration tests, which would tend to be more detailed with less interest from stakeholders (though immense interest internally in the department).
If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass.
You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!