How is unit testing related to code coverage?

I know that code coverage is a metric, but it is often mentioned in articles about unit testing. When designing my unit tests, I try to write tests for my business logic and do not care much about coverage. What is the relation, then?

The thinking goes as follows:
If the code executed when you run your unit tests covers all the code in the class you test (the system under test, SUT), you have obviously exercised all relevant code.
So, high code coverage of the SUT is a good thing.
But it can also be misleading. 100% code coverage doesn't mean that you actually tested all your business logic. So concentrating on testing all your business logic is the better approach.
If you have tested all your business logic, you will have 100% code coverage - or there is some code in your business logic that is not needed there.
Still, you can use code coverage to check whether you actually have tested all your business logic.
So, to sum up:
If you don't have 100% code coverage of your SUT, it strongly suggests that you haven't tested your complete business logic.
BUT: 100% code coverage doesn't ensure that you tested all of your logic.
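To make that concrete, here is a minimal sketch (NUnit-style; PriceCalculator and its tests are hypothetical). The first test reaches 100% line and branch coverage of the method but barely verifies the business rule; the second test is the one that actually pins the logic down:

using NUnit.Framework;

public class PriceCalculator
{
    // Business rule: orders of 100 or more items get a 10% discount.
    public decimal Total(int quantity, decimal unitPrice)
    {
        var total = quantity * unitPrice;
        if (quantity >= 100)
            total *= 0.9m; // discount branch
        return total;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Covers_Both_Branches_But_Verifies_Almost_Nothing()
    {
        var calc = new PriceCalculator();

        // Together these calls give 100% line and branch coverage of Total(),
        // yet the assertion is far too weak to catch a wrong discount rate.
        var small = calc.Total(1, 10m);
        var large = calc.Total(100, 10m);

        Assert.IsTrue(small > 0 && large > 0);
    }

    [Test]
    public void Applies_Ten_Percent_Discount_From_100_Items()
    {
        var calc = new PriceCalculator();

        // This is the test that actually pins down the business logic.
        Assert.AreEqual(900m, calc.Total(100, 10m));
        Assert.AreEqual(990m, calc.Total(99, 10m));
    }
}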

Literally, a "unit test" tests a single unit, or in other words a single component. Thus, unit tests do not necessarily cover business requirements, but rather ensure that every piece of a component does what it is meant to do. Code coverage measures the percentage of code that has been exercised by tests and thus validated. Every piece of code that has not been tested might contain defects, so a high code coverage result is desirable.
Of course, unit tests - or rather the frameworks used to implement unit tests - can also be used to test more than one component at once. Such tests are more like integration tests, though on a very low level.
The relation between code coverage and integration (or business-logic) testing is that if you have 100% code coverage, you know that every component does what it is supposed to do. But if you want to ensure that the application does what it is supposed to do, high code coverage is not nearly enough: for this, additional integration tests (which test more than one component at a time) are needed.

Unit testing with code coverage turned on can be a very powerful tool.
For example, when testing a method you can see whether you have hit all the logic paths. If not, more testing is required. If it is impossible to hit a section of code, you know you can remove it without any side effects.
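For instance, in this small hypothetical sketch (NUnit-style), a coverage report would flag the negative-total branch as never executed, which prompts you either to add a test for it or to delete it as dead code:

using NUnit.Framework;

public static class ShippingCost
{
    public static decimal For(decimal orderTotal)
    {
        if (orderTotal >= 50m)
            return 0m;       // free shipping
        if (orderTotal < 0m)
            return 0m;       // a coverage run shows no test ever reaches this branch
        return 4.99m;
    }
}

[TestFixture]
public class ShippingCostTests
{
    [Test]
    public void Orders_Of_50_Or_More_Ship_Free() =>
        Assert.AreEqual(0m, ShippingCost.For(50m));

    [Test]
    public void Small_Orders_Pay_A_Flat_Rate() =>
        Assert.AreEqual(4.99m, ShippingCost.For(10m));

    // The coverage report highlights the "orderTotal < 0m" branch as uncovered:
    // either a negative total is a real case that deserves its own test,
    // or it can never happen and the branch is dead code that can be removed.
}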

Related

Unit Tests become Integration Tests with TDD

Assume that I'm writing an application which uses Test Driven Development.
All the samples that I find are very small examples that try to explain how tests need to be written in TDD.
When you write a test in TDD, you write a very small piece of code whose purpose is to test a single piece of code, a single method, so a unit test.
After some time, a requirement from the client is received and you need to change your original code, allowing it to accept many more arguments and splitting the method into multiple methods over multiple layers.
Let's say that logging is added when a failure occurs. What do I need to test then, the logging component separately, or chained together with the original method?
This means that the original unit test is in fact becoming an integration test as I'm testing multiple components together now.
Is this something that should be avoided, or how does one solve those kinds of issues if needed?
Kind regards
TDD in the real world actually uses both unit tests and integration tests. Unit tests are seen in tutorials because it's easier to understand simple examples, but real applications need some integration tests. It's typical for the first test you write to be an integration test (see BDD).
However, integration tests are slow and hard to maintain (they touch more of the system than unit tests, so they change more frequently), so it's good to have only as many integration tests as needed and do as much of your testing with unit tests as is reasonable.
When requirements on a class cause it to become larger and you refactor the class into smaller classes, its unit tests are now integration tests. Address this by writing focused unit tests for the new classes and removing most of the old tests for the original class. It may be appropriate to leave behind one or a few of the old tests as integration tests. It also may be appropriate to rewrite some of the old tests to use test doubles (stubs, mocks, etc.) for what are now instances of other classes. Coincidentally, I recently wrote an answer about the mechanics of rewriting tests when you refactor a class out of another class.
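A rough sketch of what that rewrite can look like (NUnit-style; all class names are hypothetical): after extracting TaxCalculator out of OrderProcessor, the new focused unit test stubs the calculator, while one of the old tests survives as a small integration test of the pair:

using NUnit.Framework;

public interface ITaxCalculator
{
    decimal TaxFor(decimal net);
}

// Extracted from OrderProcessor during the refactoring.
public class TaxCalculator : ITaxCalculator
{
    public decimal TaxFor(decimal net) => net * 0.2m;
}

public class OrderProcessor
{
    private readonly ITaxCalculator _tax;
    public OrderProcessor(ITaxCalculator tax) => _tax = tax;

    public decimal GrossTotal(decimal net) => net + _tax.TaxFor(net);
}

[TestFixture]
public class OrderProcessorTests
{
    // Hand-written stub so the unit test exercises OrderProcessor alone.
    private class FixedTax : ITaxCalculator
    {
        public decimal TaxFor(decimal net) => 5m;
    }

    [Test]
    public void GrossTotal_Adds_Whatever_The_Calculator_Returns() =>
        Assert.AreEqual(105m, new OrderProcessor(new FixedTax()).GrossTotal(100m));
}

[TestFixture]
public class OrderProcessingIntegrationTests
{
    // One of the old tests kept as a small integration test of the pair.
    [Test]
    public void GrossTotal_With_Real_Calculator_Applies_20_Percent_Tax() =>
        Assert.AreEqual(120m, new OrderProcessor(new TaxCalculator()).GrossTotal(100m));
}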
In addition to the other answers, you could have a look at the extended TDD cycle as defined in the book Growing Object-Oriented Software, Guided by Tests. There, acceptance tests form an outer loop that drives the inner loop of writing unit tests; depending on the situation, however, I have found that you can also use integration tests in that outer role.
So there is no need to avoid them. What matters in my experience are the granularity and the number of tests (fewer integration tests, more unit tests).
TDD or not, the idea of a unit test is that you isolate a unit of the application and verify its code flows in isolation. A unit is typically a class, and you would be looking at at least one unit test per code branch of a method. E.g. if classA.methodA() has 3 branches, you will have 3+ unit tests for that method.
A true unit test injects mocked/stubbed dependencies into the component, invokes the method to be tested and verifies its behavior and/or object state. Unit tests in principle should encourage you to improve the design of your source code in terms of loose coupling, separation of concerns, etc. (SOLID principles).
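A minimal sketch of that in practice (NUnit-style; all names are hypothetical): the notifier dependency is replaced with a hand-rolled spy, and there is one test per branch of the method under test:

using NUnit.Framework;

public interface INotifier
{
    void Warn(string message);
}

public class StockChecker
{
    private readonly INotifier _notifier;
    public StockChecker(INotifier notifier) => _notifier = notifier;

    // Three branches: out of stock, low stock, plenty in stock.
    public string Check(int quantity)
    {
        if (quantity == 0)
        {
            _notifier.Warn("out of stock");
            return "reorder now";
        }
        if (quantity < 10)
            return "reorder soon";
        return "ok";
    }
}

[TestFixture]
public class StockCheckerTests
{
    // Hand-rolled spy standing in for the real notifier (e-mail, chat, ...).
    private class SpyNotifier : INotifier
    {
        public int Warnings;
        public void Warn(string message) => Warnings++;
    }

    [Test]
    public void Zero_Stock_Warns_And_Asks_For_Reorder()
    {
        var spy = new SpyNotifier();
        Assert.AreEqual("reorder now", new StockChecker(spy).Check(0));
        Assert.AreEqual(1, spy.Warnings); // behavior verification
    }

    [Test]
    public void Low_Stock_Asks_For_Reorder_Soon() =>
        Assert.AreEqual("reorder soon", new StockChecker(new SpyNotifier()).Check(5));

    [Test]
    public void Healthy_Stock_Is_Ok() =>
        Assert.AreEqual("ok", new StockChecker(new SpyNotifier()).Check(50));
}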
Further, code coverage is a good measure of the quality of your unit tests, but striving for 100% isn't advised. Also, writing unit tests for every application layer is overkill; you would want to target the layers that contain business logic to achieve a good return on investment. Lastly, do not write unit tests without having a Continuous Integration pipeline, since they tend to become stale very quickly.
On the contrary, when you start verifying two or more units in one test, it becomes an integration test, since your test result is influenced by the success or failure of each unit. These tend to require more effort to set up the environment, can be flaky due to external dependencies, and can be slow depending on the volume of transactions. They are definitely useful, and you should aim for a level of coverage that fits your budget constraints. Integration tests should also be part of the CI/CD pipeline, but can be run less often than unit tests.
Hope this helps.
There is no real fundamental difference between a unit test and an integration test.
A low level (unit) test of one of your classes will likely also exercise, and rely on, classes provided by your runtime environment or application framework. So your unit test could also be considered as an integration test of the combination of your code with the code of the runtime environment.
With no fundamental difference between them, there is no reason to be concerned if something once labelled "unit test" is now labelled "integration test".

When should I write the different kinds of tests in TDD?

There are different kinds of tests: unit, integration, functional, and acceptance. So if I'm doing proper test-driven development, when do I write each kind of test?
I'm thinking that in typical TDD, the unit tests are the kind of tests that precede the writing of code. The typical workflow I see is:
Write failing unit test
Run test to verify that it fails
Write simplest passing function/method
Run test to verify that it passes
Refactor code
Soooo...where do the integration, functional, and acceptance tests come in? Do you write them after the code? Or do you write them along with the unit test at the very beginning?
Also, as an additional question, I often hear about this "100% code coverage" idea. It's easy to see how this would apply to unit testing--just have one test for every method. But should you aim to have 100% code coverage for each kind of test? For example, should unit tests cover 100% of my code AND functional tests cover 100% of my code (albeit from a more broad perspective)?
While it tends to fit more naturally with lower level tests, TDD is really a mindset that can be applied at any (or all) levels. You could write a failing acceptance test, then write corresponding failing integration tests, break them down into failing unit tests and then "green up" your way back to the original acceptance test as you make each test in the chain pass.
An article that illustrates this : ATDD From the Trenches
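A compressed, purely illustrative sketch of that outer/inner loop (NUnit-style; every name here is hypothetical): the acceptance-level test is written first from the user's point of view, and the focused unit tests drive out the component that eventually makes it pass:

using NUnit.Framework;

// Inner component, driven out by unit tests.
public class GreetingFormatter
{
    public string Format(string name) =>
        string.IsNullOrWhiteSpace(name) ? "Hello, stranger!" : $"Hello, {name}!";
}

// Thin application layer wiring the pieces together.
public class GreetingApp
{
    private readonly GreetingFormatter _formatter = new GreetingFormatter();
    public string GreetVisitor(string name) => _formatter.Format(name);
}

// Outer loop: a (deliberately tiny) acceptance-level test written first,
// phrased in terms of what the user sees, going through the application layer.
[TestFixture]
public class GreetingAcceptanceTests
{
    [Test]
    public void A_Visitor_Without_A_Name_Is_Still_Greeted() =>
        Assert.AreEqual("Hello, stranger!", new GreetingApp().GreetVisitor(""));
}

// Inner loop: focused unit tests that drive out the formatter while the
// acceptance test above is still red.
[TestFixture]
public class GreetingFormatterTests
{
    [Test]
    public void Greets_By_Name() =>
        Assert.AreEqual("Hello, Ada!", new GreetingFormatter().Format("Ada"));

    [Test]
    public void Falls_Back_To_Stranger_For_Blank_Input() =>
        Assert.AreEqual("Hello, stranger!", new GreetingFormatter().Format("   "));
}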
Regarding code coverage: in my experience you get most of it from unit tests and/or integration tests, depending on the degree of isolation you like in your testing style. Anyway, I see them as complementary; you shouldn't look for 100% coverage in each test category. Higher-level tests (system, end-to-end, acceptance...), on the other hand, will typically flush out configuration/environment problems, which generally don't have an impact on code coverage.
I typically write an external test first that will drive the development of the feature. This approach is part of the London School of TDD.
As highlighted in the above article by Jason Gorman, The London school's definitive text is Growing Object Oriented Software Guided By Tests by Steve Freeman and Nat Pryce.

Integration tests, unit tests and code coverage

I was reviewing some code for a colleague and came across a test in the unit test class that looked like this:
// setup
Foo f = ...
FooToBarConverter ftb = ...
Bar bar = ftb.Convert(f); // It is easier to create a Bar by converting it from a Foo than making one 'from scratch'
// test
systemUnderTest.DoSomething(bar);
// assert
Assert.IsTrue(...)
Clearly this is an integration test, as it tests the FooToBarConverter as well as the system under test, and it is the only test that covers the DoSomething() method. I suggested moving this test to an integration test solution; however, this reduces the code coverage of the unit tests. We are aiming for 100% unit test code coverage (and yes - I know that 100% coverage is a means to an end, not the end itself, and that 100% covered code is not necessarily 100% correct code).
Is there a reason for creating unit tests to bring the coverage back up if we move the integration test out?
Or are we aiming for the wrong thing with 100% unit test coverage? Should we be aiming for 100% coverage with the combination of all our tests (or even aiming for 100% at all)?
Thank you.
EDIT/UPDATE:
This is not a question about how to unit test the system under test properly (I know the reasons that this is not a unit test, and I know how to properly convert it into a unit test), nor is it a question on coverage on FooToBarConverter. I want opinions on code coverage on the system under test: are integration tests on the system under test sufficient? or should there also be unit tests?
I think the answer here is "it depends".
If you have full unit test coverage on the FooToBarConverter class then probably you are OK just with the integration test of systemUnderTest because you can say with confidence that the real FooToBarConverter behaves as expected in this context and therefore does not incorrectly influence the result of the test.
On the other hand, it's unclear specifically what this test is checking for - are you examining the behaviour of systemUnderTest when given a valid FooToBarConverter, or some other expected side effect within systemUnderTest to which FooToBarConverter is a purely coincidental actor? (i.e. are you sure that this isn't an indirect test of bar?)
Now personally I would recommend that you also do a proper, "pure" unit test (using a mock or stub of FooToBarConverter) for systemUnderTest because
It will make regressions easier to manage; suppose that in the future some change to FooToBarConverter makes its unit tests fail - it will quite possibly also make this integration test fail. That could be confusing for someone looking at the failed tests and not knowing that the integration test failure can be ignored and that only the FooToBarConverter tests need to be fixed. It's a small thing, I know, but it might save 5 important minutes some day :)
How do you test the negative cases (the behaviour of systemUnderTest when given a broken/invalid/null FooToBarConverter)? Since you'll probably have to write unit tests with stubs/mocks for those kinds of cases anyway, you might as well have a unit test for the good case in the same project/test class as well; it's much clearer. Otherwise you have to aggregate code coverage across both the unit test and integration test projects to verify that systemUnderTest is fully covered...
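For illustration, a "pure" version of that test might look roughly like this (NUnit-style; the Bar constructor, the SomethingWasDone flag and the null-handling behaviour are stand-ins invented for the sketch, not the real API from the question):

using System;
using NUnit.Framework;

// Minimal stand-ins so the sketch compiles; in the real code base these are
// the existing Bar and system-under-test types from the question.
public class Bar { }

public class SystemUnderTest
{
    public bool SomethingWasDone { get; private set; }

    public void DoSomething(Bar bar)
    {
        if (bar == null) throw new ArgumentNullException(nameof(bar));
        SomethingWasDone = true;
    }
}

[TestFixture]
public class SystemUnderTestUnitTests
{
    [Test]
    public void DoSomething_Works_With_A_Directly_Constructed_Bar()
    {
        // setup: build the Bar by hand instead of going through the real
        // FooToBarConverter, so only the system under test is exercised.
        var bar = new Bar();
        var systemUnderTest = new SystemUnderTest();

        // test
        systemUnderTest.DoSomething(bar);

        // assert
        Assert.IsTrue(systemUnderTest.SomethingWasDone);
    }

    [Test]
    public void DoSomething_Rejects_Null_Input()
    {
        // negative case: no converter involved at all
        Assert.Throws<ArgumentNullException>(() => new SystemUnderTest().DoSomething(null));
    }
}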
Also, don't worry about 100% code coverage; it's nice to have, but in practice it's rare to see. I don't mean this as a knock against good design practices either; the simple reality is that no design is 100% perfect, and therefore it's to be expected that there are times when you just don't have the time/resources/will to refactor your classes to allow every single dependency to be injected, or to use interfaces for every inter-seam interaction, etc.
Hope that helps.

Why is using integration tests instead of unit tests a bad idea?

Let me start with the definitions:
Unit Test is a software verification and validation method in which a programmer tests if individual units of source code are fit for use
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group.
Although they serve different purposes, these terms are very often mixed up. Developers refer to automated integration tests as unit tests. Also, some argue about which one is better, which seems to me to be the wrong question entirely.
I would like to ask the development community to share their opinions on why automated integration tests cannot replace classic unit tests.
Here are my own observations:
Integration tests cannot be used with a TDD approach
Integration tests are slow and cannot be executed very often
In most cases integration tests do not indicate the source of the problem
It's more difficult to create a test environment with integration tests
It's more difficult to ensure high coverage (e.g. simulating special cases, unexpected failures, etc.)
Integration tests cannot be used with interaction-based testing
Integration tests move the moment of discovering a defect further away (from paxdiablo)
EDIT: Just to clarify once again: the question is not about whether to use integration or unit testing, and not about which one is more useful. Basically, I want to collect arguments for development teams which write ONLY integration tests and consider them unit tests.
Any test which involves components from different layers is considered an integration test here. Compare this to a unit test, where isolation is the main goal.
Thank you,
Andrey
Integration tests tell you whether it's working. Unit tests tell you what isn't working. So long as everything is working, you "don't need" the unit tests - but once something is wrong, it's very nice to have the unit test point you directly to the problem. As you say, they serve different purposes; it's good to have both.
To directly address your subject: integration tests aren't the problem; using them instead of unit tests is.
There have been studies(a) that show that the cost of fixing a bug becomes higher as you move away from the point where the bug was introduced.
For example, it will generally cost you relatively little to fix a bug in software you haven't even pushed up to source control yet. It's your time and not much of it, I'd warrant (assuming you're any good at your job).
Contrast that with how much it costs to fix when the customer (or all your customers) find that problem. Many levels of people get involved and new software has to be built in a hurry and pushed out to the field.
That's the extreme comparison. But even the difference between unit and integration tests can be apparent. Code that fails unit testing mostly affects only the single developer (unless other developers/testers/etc are waiting on it, of course). However, once your code becomes involved in integration testing, a defect can begin holding up other people on your team.
We wouldn't dream of replacing our unit tests with integration tests since:
Our unit tests are automated as well so, other than initial set-up, the cost of running them is small.
They form the beginning of the integration tests. All unit tests are rerun in the integration phase to check that the integration itself hasn't broken anything, and then there are the extra tests that have been added by the integration team.
(a) See, for example, http://slideshare.net/Vamsipothuri/defect-prevention, slide #5, or search the net for "Defect prevention: Reducing costs and enhancing quality".
I find integration tests markedly superior to unit tests. If I unit test my code, I'm only testing what it does versus my understanding of what it should do. That only catches implementation errors. But often a much bigger problem is errors of understanding. Integration tests catch both.
In addition, there is a dramatic cost difference; if you're making intensive use of unit tests, it's not uncommon for them to outweigh all the rest of your code put together. And they need to be maintained, just like the rest of the code does. Integration tests are vastly cheaper -- and in most cases, you already need them anyway.
There are rare cases where it might be necessary to use unit tests, e.g. for internal error handling paths that can't be triggered if the rest of the system is working correctly, but most of the time, integration tests alone give better results for far lower cost.
Integration tests are slow.
Integration tests may break for different reasons (they are not focused and isolated), so you need more debugging on failures.
The combination of scenarios is too big for integration tests when the code is not unit tested.
Mostly I do unit tests, and ten times fewer integration tests (configuration, queries).
In many cases you need both. Your observations are right on track as far as I'm concerned with respect to using integration tests as unit tests, but they don't mean that integration tests are not valuable or needed, just that they serve a different purpose. One could equally argue that unit tests can't replace integration tests, precisely because they remove the dependencies between objects and they don't exercise the real environment. Both are correct.
It's all about reducing the iteration time.
With unit tests, you can write a line of code and verify it in a minute or so. With integration tests, it usually takes significantly longer (and the cost increases as the project grows).
Both are clearly useful, as both will detect issues that the other fails to detect.
OTOH, from a "pure" TDD approach, unit tests aren't tests, they're specifications of functionality. Integration tests, OTOH, really do "test" in the more traditional sense of the word.
Integration testing generally happens after unit testing. I'm not sure what value there is in testing interactions between units that have not themselves been tested.
There's no sense in testing how the gears of a machine turn together if the gears might be broken.
The two types of tests are different. Unit tests, in my opinion, are not an alternative to integration tests, mainly because integration tests are usually context specific. You may well have a scenario where a unit test fails and your integration test doesn't, and vice versa. If you implement incorrect business logic in a class that utilizes many other components, you would want your integration tests to highlight this; your unit tests are oblivious to it.
I understand that integration testing is quick and easy, but I would argue that you rely on your unit tests each time you make a change to your code base, and having a list of greens gives you more confidence that you have not broken any expected behavior at the individual class level. Unit tests check that a single class is doing what it was designed to do. Integration tests check that a number of classes working together do what you expect them to do for that particular collaboration. That is the whole idea of OO development: individual classes that encapsulate particular logic, which allows for reuse.
I think coverage is the main issue.
A unit test of a specific small component, such as a method or at most a class, is supposed to test that component in every legal scenario (of course, one abstracts using equivalence classes, but every major one should be covered). As a result, a change that breaks the established specification should be caught at this point.
In most cases, an integration test uses only a subset of the possible scenarios for each subunit, so it is possible for malfunctioning units to still produce a program that initially integrates well.
It is typically difficult to achieve maximal coverage in integration testing, for all the reasons listed in the question. Without unit tests, it is more likely that a change which essentially puts a unit into a new scenario would not be caught and might be missed in the integration testing. Even if it is not missed, pinpointing the problem may be extremely difficult.
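A small sketch of that situation (NUnit-style; all names hypothetical): the unit tests cover every equivalence class of the discount rule, while the integration test happens to drive it through only one of them, so a regression in the other classes would slip past integration testing alone:

using NUnit.Framework;

public class DiscountPolicy
{
    // Three equivalence classes: under 18, 18-64, 65 and over.
    public decimal For(int age)
    {
        if (age < 18) return 0.50m;
        if (age >= 65) return 0.30m;
        return 0m;
    }
}

public class Checkout
{
    private readonly DiscountPolicy _policy = new DiscountPolicy();

    public decimal PriceFor(int age, decimal listPrice) =>
        listPrice * (1 - _policy.For(age));
}

[TestFixture]
public class DiscountPolicyUnitTests
{
    private readonly DiscountPolicy _policy = new DiscountPolicy();

    [Test] public void Children_Get_Half_Price() => Assert.AreEqual(0.50m, _policy.For(10));
    [Test] public void Adults_Get_No_Discount() => Assert.AreEqual(0.00m, _policy.For(30));
    [Test] public void Seniors_Get_Thirty_Percent_Off() => Assert.AreEqual(0.30m, _policy.For(70));
}

[TestFixture]
public class CheckoutIntegrationTests
{
    [Test]
    public void Typical_Adult_Checkout_Pays_Full_Price()
    {
        // This end-to-end scenario exercises only the "adult" branch of the
        // policy, so a broken child or senior discount would slip through.
        Assert.AreEqual(100m, new Checkout().PriceFor(30, 100m));
    }
}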
I am not sure that most developers refer to integration tests as unit tests. My impression is that most developers understand the difference, which does not mean they practice either.
A unit test is written to test a method on a class. If that class depends on any kind of external resource or behavior, you should mock them, to ensure you test just your single class. There should be no external resources in a unit test.
An integration test is a higher level of granularity, and as you stated, you should test multiple components to check if they work together as expected. You need both integration tests and unit tests for most projects. But it is important they are kept separate and the difference is understood.
Unit tests, in my opinion, are more difficult for people to grasp. They require a good knowledge of OO principles (fundamentally, one class, one responsibility). If you are able to test all your classes in isolation, chances are you have a well-designed solution which is maintainable, flexible and extendable.
When you check in, your build server should run only the unit tests, and they should be done in a few seconds, not minutes or hours. Integration tests should be run overnight or manually as needed.
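One common way to keep the two suites separate is to tag tests by category and let the build server filter on the tag (a sketch assuming NUnit's Category attribute; the exact filter syntax depends on your runner and CI server):

using NUnit.Framework;

[TestFixture]
public class OrderServiceTests
{
    // Plain unit test: no category, runs on every check-in.
    [Test]
    public void Calculates_Order_Total() { /* ... */ }
}

[TestFixture]
[Category("Integration")] // the CI server filters on this to run the suite overnight
public class OrderServiceDatabaseTests
{
    [Test]
    public void Persists_And_Reloads_An_Order() { /* ... */ }
}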
Unit tests focus on testing an individual component and do not rely on external dependencies. They are commonly used with mocks or stubs.
Integration tests involve multiple components and may rely on external dependencies.
I think both are valuable and neither one can replace the other in the job they do. I do see a lot of integration tests masquerading as unit tests, though: they have dependencies and take a long time to run. The two kinds should run separately, and both as part of a continuous integration system.
Integration tests do often find things that unit tests do not though...
Integration tests let you check that whole use cases of your application work.
Unit tests check that low-level logic in your application is correct.
Integration tests are more useful for managers to feel safer about the state of the project (but useful for developers too!).
Unit tests are more useful for developers writing and changing application logic.
And of course, use them both to achieve best results.
It is a bad idea to "use integration tests instead of unit tests" because it means you aren't appreciating that they test different things, and of course passing and failing tests will give you different information. They make up a sort of yin and yang of testing, approaching it from either side.
Integration tests take an approach that simulates how a user would interact with the application. These cut down on the need for as much manual testing, and passing tests can tell you that your app is good to go on multiple platforms. A failing test tells you that something is broken, but often doesn't give you a whole lot of information about what's wrong with the underlying code.
Unit tests should focus on making sure the inputs and outputs of your functions are what you expect them to be in all cases. Passing unit tests can mean that your functions are working according to spec (assuming you have tests for all situations). However, all your functions working properly in isolation doesn't necessarily mean that everything will work perfectly when it's deployed. A failing unit test gives you detailed, specific information about why it's failing, which should in theory make it easier to debug.
In the end I believe a combination of both unit and integration tests will yield the quickest and most bug-free software. You could choose to use one and not the other, but I avoid using the phrase "instead of".
How I see integration testing & unit testing:
Unit testing: test small things in isolation, with low-level details, including but not limited to method conditions, checks, loops, defaulting, calculations, etc.
Integration testing: test a wider scope involving a number of components that can impact each other's behaviour when married together. Integration tests should cover end-to-end integration and behaviours. The purpose of integration tests is to prove that systems/components work fine when integrated together.
(I think) what the OP refers to here as integration tests leans more towards scenario-level tests.
But where do we draw the line between unit -> integration -> scenario?
What I often see is developers writing a feature and then, when unit testing it, mocking away every other piece of code the feature uses/consumes, and only testing their own feature code because they think someone else has tested the rest, so it should be fine. This helps code coverage but can harm the application in general.
In theory, the tight isolation of unit tests should cover a lot, since everything is tested in its own scope. But such tests are flawed and do not see the complete picture.
A good unit test should try to mock as little as possible. Mocking the API and persistence layers would be an example. Even if the application itself does not use IoC (Inversion of Control), it should be easy to spin up some objects for a test without mocking; if every developer working on the project does this as well, it gets even easier. Then the tests are useful. These kinds of tests have an integration character to them and aren't as easy to write, but they help you find design flaws in your code. If it is not easy to test, then adapt your code to make it easy to test. (TDD)
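Before the pros and cons, a sketch of that "mock as little as possible" style (NUnit-style; all names hypothetical): the real domain objects are used and only the persistence boundary is replaced by an in-memory fake:

using System.Collections.Generic;
using NUnit.Framework;

public class Invoice
{
    public Invoice(string customer, decimal amount) { Customer = customer; Amount = amount; }
    public string Customer { get; }
    public decimal Amount { get; }
}

public interface IInvoiceStore
{
    void Save(Invoice invoice);
    IReadOnlyList<Invoice> All();
}

// Only the persistence boundary is faked; everything else is the real code.
public class InMemoryInvoiceStore : IInvoiceStore
{
    private readonly List<Invoice> _invoices = new List<Invoice>();
    public void Save(Invoice invoice) => _invoices.Add(invoice);
    public IReadOnlyList<Invoice> All() => _invoices;
}

public class BillingService
{
    private readonly IInvoiceStore _store;
    public BillingService(IInvoiceStore store) => _store = store;

    public void Bill(string customer, decimal amount) =>
        _store.Save(new Invoice(customer, amount)); // real domain object, no mock
}

[TestFixture]
public class BillingServiceTests
{
    [Test]
    public void Billing_A_Customer_Stores_An_Invoice()
    {
        var store = new InMemoryInvoiceStore();
        new BillingService(store).Bill("ACME", 250m);

        Assert.AreEqual(1, store.All().Count);
        Assert.AreEqual(250m, store.All()[0].Amount);
    }
}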
Pros
Fast issue identification
Helps even before a PR merge
Simple to implement and maintain
Provides a lot of data for code quality checking (e.g. coverage)
Allows TDD (Test Driven Development)
Cons
Misses scenario integration errors
Succumbs to developer blindness in their own code (happens to all of us)
A good integration test is executed for complete end-to-end scenarios and even checks persistence and APIs, which the unit tests could not cover, so you know where to look first when those fail.
Pros:
Tests close to real-world e2e scenarios
Finds issues that developers did not think about
Very helpful in microservices architectures
Cons:
Most of the time slow
Often needs a rather complex setup
Environment pollution issues (persistence and APIs need cleanup steps)
Mostly not feasible to run on PRs (pull requests)
TL;DR: You need both; you can't replace one with the other! The question is how to design such tests to get the best from both, and not just have them to show good statistics to management.

What are key points to explain Unit Testing

I want to introduce Unit Testing to some colleagues that have no or little experience with Unit Testing. I'll start with a presentation of about an hour to explain the concept and give lots of examples. I'll follow up with pair programming sessions and code reviews.
What are the key points that should be focused on in the introduction?
To keep it really short: Unit testing is about two things
a tool for verifying intentions
a necessary safety net for refactoring
Obviously, it is a lot more than that, but to me that pretty much sums it up.
Unit tests test small things
Another thing to remember is that unit tests test small things, "units". So if your test runs against a resource like a live server or a database, most people call that a system or integration test. To unit test just the code that talks to a resource like that, people often use mock objects (often called mocks).
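A tiny sketch of that idea (NUnit-style; all names hypothetical): the resource is hidden behind an interface so the test can substitute a hand-written mock instead of hitting a live server:

using NUnit.Framework;

public interface IExchangeRateService   // would normally call a live server
{
    decimal UsdToEur();
}

public class PriceTagger
{
    private readonly IExchangeRateService _rates;
    public PriceTagger(IExchangeRateService rates) => _rates = rates;

    public decimal InEur(decimal usd) => usd * _rates.UsdToEur();
}

[TestFixture]
public class PriceTaggerTests
{
    // Hand-written mock: no network, fully deterministic, runs in microseconds.
    private class FixedRates : IExchangeRateService
    {
        public decimal UsdToEur() => 0.5m;
    }

    [Test]
    public void Converts_Using_The_Current_Rate() =>
        Assert.AreEqual(50m, new PriceTagger(new FixedRates()).InEur(100m));
}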
Unit tests should run fast and be run often
When unit tests test small things, the tests run fast. That's a good thing. Frequently running unit tests helps you catch problems soon after they occur. The ultimate in frequently running unit tests is having them automated as part of continuous integration.
Unit tests work best when coverage is high
People have different views as to whether 100% unit test coverage is desirable. I'm of the belief that high coverage is good, but that there's a point of diminishing returns. As a very rough rule of thumb, I would be happy with a code base that had 85% coverage with good unit tests.
Unit tests aren't a substitute for other types of tests
As important as unit tests are, other types of testing, like integration tests, acceptance tests, and others can also be considered parts of a well-tested system.
Unit testing existing code poses special challenges
If you're looking to add unit tests to existing code, you may want to look at Working Effectively with Legacy Code by Michael Feathers. Code that wasn't designed with testing in mind may have characteristics that make testing difficult and Feathers writes about ways of carefully refactoring code to make it easier to test. And when you're familiar with certain patterns that make testing code difficult, you and your team can write code that tries to avoid/minimize those patterns.
You might get some inspiration here too https://stackoverflow.com/questions/581589/introducing-unit-testing-to-a-wary-team/581610#581610
Remember to point out that unit testing is not a silver bullet and shouldn't replace other forms of traditional testing (functional tests etc.), but should be used in conjunction with them.
Unit testing works better in some areas than others, so the only way to have truly comprehensive testing is to combine it with other forms.
This seems to be one of the biggest criticisms I see of Unit Testing as a lot of people don't seem to 'get' that it shouldn't be replacing other forms of testing in totality.
Main points:
unit tests help both to design code (by expressing intent) and to regression-test it (by never going away);
unit tests are for lazy programmers who don't want to debug their code again;
tests have no business influencing or affecting business logic and functionality, but they do test it fully;
unit tests demand the same qualities as regular code: theory and strategy, organization, patterns, smells, refactoring;
Unit tests should be FAIR:
F - Fast
A - can be easily Automated
I - can be run Independently
R - Repeatable