Is unit testing always white box testing?

Does "Unit Testing" fall under white box or black box testing? Or is it totally a separate type of testing than the other two?

I think this article by Kent Beck, though it refers more to TDD and unit testing, sums this up fairly well. Basically, it depends on how you actually write the tests.* Here is another article on the subject that might help clarify things.
*If you are testing from within your application, then it is white box. If you are testing it just like an outsider would, making calls only to your public-facing API, then it is black box.
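To make that contrast concrete, here is a minimal sketch in C# with NUnit (the class and its internal fields are invented for illustration): the first test is black box because it only calls the public API; the second is white box because it peeks at internal state.

using NUnit.Framework;

// Hypothetical class under test; the internal fields exist only for illustration.
public class SimpleStack
{
    internal int[] slots = new int[8]; // implementation detail
    internal int count;                // implementation detail

    public void Push(int value) { slots[count++] = value; }
    public int Pop() { return slots[--count]; }
}

[TestFixture]
public class SimpleStackTests
{
    [Test] // Black box: exercises only the public API, like an outside caller.
    public void Pop_ReturnsLastPushedValue()
    {
        var stack = new SimpleStack();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }

    [Test] // White box: inspects internal state, so it is coupled to the design.
    public void Push_StoresValueInFirstSlot()
    {
        var stack = new SimpleStack();
        stack.Push(42);
        Assert.AreEqual(42, stack.slots[0]);
        Assert.AreEqual(1, stack.count);
    }
}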

The usual criteria for white-box testing are execution path and data structure sensitization. These are sometimes called "branch testing", "path testing", or "data flow testing". See Wikipedia on white-box testing.
That is, unit-test refers to the level at which the test takes place in the structure of the system, whereas white- and black-box testing refer to whether, at any level, the test approach is based on the internal design or only on the external specification of the unit.
So if your unit-test sensitizes all of the execution paths and data structures in the unit that you are testing, then it is a white-box test. However, if your unit-test cannot sensitize most of the paths and data structures of the unit, then it can't claim to be a white-box test.
Be advised that in some organizations unit-testing is called white-box testing regardless of whether the unit-test is based on the unit's design rather than just its API. Best not to argue with your boss on this point.


How to write good Unit Tests? [closed]

Could anyone suggest books or materials to learn unit test?
Some people consider code without unit tests to be legacy code. Nowadays, Test Driven Development is the approach for managing big software projects with ease. I like C++ a lot; I learnt it on my own without any formal education. I never looked into unit testing before, so I feel left out. I think unit tests are important and would be helpful in the long run. I would appreciate any help on this topic.
My main points of concern are:
What is a Unit Test? Is it a comprehensive list of test cases which should be analyzed? So let us say I have a class called "Complex Numbers" with some methods in it (let's say finding the conjugate, an overloaded assignment operator, and an overloaded multiplication operator). What would be typical test cases for such a class? Is there any methodology for selecting test cases?
Are there any frameworks which can create unit tests for me, or do I have to write my own test classes? I see a "Test" option in Visual Studio 2008, but I never got it working.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
An important point (that I didn't realise in the beginning) is that Unit Testing is a testing technique that can be used by itself, without the need to apply the full Test Driven methodology.
For example, you have a legacy application that you want to improve by adding unit tests to problem areas, or you want to find bugs in an existing app. Now you write a unit test to expose the problem code and then fix it. These are semi test-driven, but can completely fit in with your current (non-TDD) development process.
Two books I've found useful are:
Test Driven Development in Microsoft .NET
A very hands-on look at Test Driven Development, following on from Kent Beck's original TDD book.
Pragmatic Unit Testing in C# with NUnit
It gets straight to the point of what unit testing is and how to apply it.
In response to your points:
A unit test, in practical terms, is a single method in a class that contains just enough code to test one aspect / behaviour of your application. Therefore you will often have many very simple unit tests, each testing a small part of your application code. In NUnit, for example, you create a TestFixture class that contains any number of test methods. The key point is that the tests "test a unit" of your code, i.e. as small a (sensible) unit as possible. You don't test the underlying APIs you use, just the code you have written.
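As a rough sketch of that last point (NUnit syntax; the Account class is a hypothetical unit under test), a fixture with one small behaviour per test method might look like this:

using System;
using NUnit.Framework;

// Hypothetical class under test.
public class Account
{
    public decimal Balance { get; private set; }

    public void Deposit(decimal amount) { Balance += amount; }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");
        Balance -= amount;
    }
}

[TestFixture]
public class AccountTests
{
    [Test] // One small behaviour per test method.
    public void Deposit_IncreasesBalance()
    {
        var account = new Account();
        account.Deposit(100m);
        Assert.AreEqual(100m, account.Balance);
    }

    [Test]
    public void Withdraw_MoreThanBalance_Throws()
    {
        var account = new Account();
        Assert.Throws<InvalidOperationException>(() => account.Withdraw(1m));
    }
}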
There are frameworks that can take some of the grunt work out of creating test classes, however I don't recommend them. To create useful unit tests that actually provide a safety net for refactoring, there is no alternative but for a developer to put thought into what and how to test their code. If you start becoming dependent on generating unit tests, it is all too easy to see them as just another task that has to be done. If you find yourself in this situation you're doing it completely wrong.
There are no simple rules as to how many unit tests per class, per method etc. You need to look at your application code and make an educated assessment of where the complexity exists and write more tests for these areas. Most people start by testing public methods only because these in turn usually exercise the remainder of the private methods. However this is not always the case and sometimes it is necessary to test private methods.
In short, even experienced unit testers start by writing obvious unit tests, then look for more subtle tests that become clearer once they have written the obvious tests. They don't expect to get every test up-front, but instead add them as they come to their mind.
While you've already accepted an answer to your question I'd like to recommend a few other books not yet mentioned:
Working Effectively with Legacy Code - Michael Feathers - As far as I know this is the only book to adequately tackle the topic of turning existing code that wasn't designed for testability into testable code. Written as more of a reference manual, it's broken down into three sections: an overview of the tools and techniques; a series of topical guides to common roadblocks in legacy code; and a set of specific dependency-breaking techniques referenced throughout the rest of the book.
Agile Principles, Patterns, and Practices - Robert C. Martin - Examples in Java; there is a sequel with examples in C#. Both are easy to adapt to C++.
Clean Code: A Handbook of Agile Software Craftsmanship - Robert C. Martin - Martin describes this as a prequel to his APPP books, and I would agree. This book makes a case for professionalism and self-discipline, two essential qualities in any serious software developer.
The two books by Robert (Uncle Bob) Martin cover much more material than just Unit testing but they drive home just how beneficial unit testing can be to code quality and productivity. I find myself referring to these three books on a regular basis.
In .NET I strongly recommend "The Art of Unit Testing" by Roy Osherove, it is very comprehensive and full of good advice.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
That is because TDD allows you to make sure after each change that everything that worked before the change still works, and if it doesn't, it lets you pinpoint what was broken much more easily (see the end of this answer).
What is a Unit Test? Is it a comprehensive list of test cases which should be analyzed?
A Unit Test is a piece of code that asks a "unit" of your code to perform an operation, then verifies that the operation was indeed performed and the result is as expected. If the result is not correct, it raises / logs an error.
So let us say I have a class called "Complex Numbers" with some methods in it (let's say finding the conjugate, an overloaded assignment operator, and an overloaded multiplication operator). What should be typical test cases for such a class? Is there any methodology for selecting test cases?
Ideally, you would test all the code (a sketch of these cases follows this list):
- when you create an instance of the class, it is created with the correct default values
- when you ask it to find the conjugates, it finds the correct ones (also test border cases, like the conjugate of zero)
- when you assign a value, the value is assigned and displayed correctly
- when you multiply a complex by a value, it is multiplied correctly
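The question is about C++, but to keep one language across these sketches, here are those cases in C# with NUnit; the Complex type below is a hypothetical stand-in with just enough behaviour to be testable.

using NUnit.Framework;

// Hypothetical minimal complex-number type, just enough to test.
public struct Complex
{
    public double Re, Im;
    public Complex(double re, double im) { Re = re; Im = im; }
    public Complex Conjugate() { return new Complex(Re, -Im); }
    public static Complex operator *(Complex a, Complex b)
    {
        return new Complex(a.Re * b.Re - a.Im * b.Im, a.Re * b.Im + a.Im * b.Re);
    }
}

[TestFixture]
public class ComplexTests
{
    [Test] // default construction
    public void Default_IsZero()
    {
        var z = new Complex();
        Assert.AreEqual(0.0, z.Re);
        Assert.AreEqual(0.0, z.Im);
    }

    [Test] // conjugate, including the zero border case
    public void Conjugate_NegatesImaginaryPart()
    {
        var z = new Complex(3, 4).Conjugate();
        Assert.AreEqual(3.0, z.Re);
        Assert.AreEqual(-4.0, z.Im);

        var zero = new Complex(0, 0).Conjugate();
        Assert.AreEqual(0.0, zero.Im);
    }

    [Test] // multiplication: (1+2i)(3+4i) = -5+10i
    public void Multiply_FollowsDefinition()
    {
        var p = new Complex(1, 2) * new Complex(3, 4);
        Assert.AreEqual(-5.0, p.Re);
        Assert.AreEqual(10.0, p.Im);
    }
}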
Are there any frameworks which can create unit tests for me, or do I have to write my own class for tests?
See CppUnit
I see an option of "Test" in Visual Studio 2008, but never got it working.
Not sure on that. I haven't used VS 2008 but it may be available just for .NET.
What are the criteria for unit tests? Should there be a unit test for each and every function in a class? Does it make sense to have unit tests for each and every class?
Yes, it does. While that is an awful lot of code to write (and maintain with every change) the price is worth paying for large projects: It guarantees that your changes to the code base do what you want them to and nothing else.
Also, when you make a change, you need to update the unit-tests for that change (so that they pass again).
In TDD, you first decide what you want the code to do (say, your complex numbers class), then write the tests that verify those operations, then write the class so that the tests compile and execute correctly (and nothing more).
This ensures that you write the minimal code possible (and don't over-complicate the design of the complex class), and it also ensures that your code does what it should. At the end of writing the code, you have a way to test its functionality and ensure its correctness.
You also have an example of using the code that you will be able to access at any point.
For further reading/documentation, look into "dependency injection" and method stubs as used in unit testing and TDD.
With test driven design, you normally want to write the tests first. They should cover the operations you're actually using/going to use. I.e. unless they're necessary for the client code to do its job, they shouldn't exist. Selecting test cases is something of an art. There are obvious things like testing boundary conditions, but in the end, nobody's found a really reliable, systematic way of assuring that tests (unit or otherwise) cover all the conditions that matter.
Yes, there are frameworks. A couple of the best known are:
Boost Unit Test Framework
CppUnit
CppUnit is a port of JUnit, so those who've used JUnit previously will probably find it comfortable. Otherwise, I'd tend to recommend Boost -- they also have a Test Library to help write the individual tests -- rather a handy addition.
Unit tests should be sufficient to ensure that the code works. If (for example) you have a private function that's used internally, you generally don't need to test it directly. Instead, you test whatever provides the public interface. As long as that works correctly, it's no business of the outside world how it does its job. Of course, in some cases it's easier to test little pieces, and when it is, that's perfectly legitimate -- but ultimately you care about the visible interface, not the internals. Certainly the whole external interface should be exercised, and test cases generally chosen to exercise the paths through the code. Again, there's nothing massively different about unit tests versus other kinds. It's mostly just a more systematic way of applying normal testing techniques.
Unit tests are simply a way to exercise a given body of code to ensure that a defined set of conditions leads to the expected set of outcomes. As Steven points out, these "exercises" should check across a range of criteria ("BICEP"). Yes, ideally you should test all of your classes and all of the methods in these classes, although there is always some room for judgement: testing shouldn't be an end in itself but rather should support the wider project goals.
Ok, so...theory is nice but to really understand Unit Testing, my recommendation would be to pull together the appropriate tools and just get started. Like most things in programming, if you have the right tools, it is easy to learn by doing.
First, pick up a copy of NUnit. It is free, easy to install and easy to work with. If you'd like some documentation, check out Pragmatic Unit Testing in C# with NUnit.
Next, go to http://www.testdriven.net/ and get a copy of TestDriven.net. It installs into Visual Studio 2008 and gives you right-click access to a full range of testing tools including the ability to run NUnit tests against a file, directory or project (typically, tests are written in a separate project). You can also run tests with debugging or, coolest of all, run all the tests against a copy of NCover. NCover will show you exactly what code is being exercised so you can figure out where you need to improve your test coverage. TestDriven.net costs $170 for a professional license but, if you are like me, it will very quickly become an integral tool in your toolbox. Anyway, I've found it to be an excellent professional investment.
Good luck!
I can't answer your question for Visual Studio 2008, but I know that Netbeans has a few integrated tools for you to use.
The code coverage tool allows you to see which paths have been checked, and how much of the code is actually covered by the unit tests.
It has support for unit tests built in.
As far as quality of tests I'm borrowing a bit from the "Pragmatic Unit Testing in Java with JUnit" by Andrew Hunt and David Thomas:
Unit testing should check for BICEP:
Boundary, Inverse relationships, Cross-checking, Error conditions, and Performance.
Also quality of the tests are determined by A-TRIP:
Automatic, Thorough, Repeatable, Independent, and Professional.
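As a small sketch of the B and E in BICEP (boundary and error conditions), with an invented AgeParser as the unit under test:

using System;
using NUnit.Framework;

// Hypothetical unit: parses an age in the range 0..130.
public static class AgeParser
{
    public static int Parse(string text)
    {
        int age = int.Parse(text);
        if (age < 0 || age > 130)
            throw new ArgumentOutOfRangeException(nameof(text));
        return age;
    }
}

[TestFixture]
public class AgeParserTests
{
    [Test] // Boundary: the edges of the valid range.
    public void Parse_AcceptsRangeEdges()
    {
        Assert.AreEqual(0, AgeParser.Parse("0"));
        Assert.AreEqual(130, AgeParser.Parse("130"));
    }

    [Test] // Error conditions: just outside the range, and garbage input.
    public void Parse_RejectsInvalidInput()
    {
        Assert.Throws<ArgumentOutOfRangeException>(() => AgeParser.Parse("-1"));
        Assert.Throws<ArgumentOutOfRangeException>(() => AgeParser.Parse("131"));
        Assert.Throws<FormatException>(() => AgeParser.Parse("abc"));
    }
}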
Here's something on when not to write unit tests (i.e. on when it's viable and even preferable to skip unit testing): Should one test internal implementation, or only test public behaviour?
The short answer is:
When you can automate integration tests (because it's important to have automated tests, but those tests don't have to be unit tests)
When it's cheap to run the integration test suite (no good if it takes two days to run, or if you can't afford to let every developer have access to an integration test equipment)
When it isn't necessary to find bugs before integration testing (which depends in part on whether components are developed separately or incrementally)
Buy the book "xUnit Test Patterns: Refactoring Test Code". It's excellent. It covers high-level strategy decisions as well as low-level test patterns.
Nowadays, Test Driven Development is the approach for managing big software projects with ease.
TDD is built on unit tests, but they are different. You don't need to use TDD to make use of unit tests. My personal preference is to write the tests first, but I don't feel I do the whole TDD thing.
What is a Unit Test?
A Unit Test is a bit of code that tests the behaviour of one unit. How one unit is defined differs between people. But in general they are:
Quick to run
Independent from each other
Test only a small part (a unit ;) of your code base.
Binary outcome - that is, it passes or fails.
Should only test one outcome of the unit (for each outcome create a different unit test)
Repeatable
Are there any frameworks which can create unit tests
To write the tests - Yes but I've never seen anyone say anything nice about them.
To help you write & run tests, a whole bunch of them.
Should there be a unit test for each and every function in a class?
You have a few different camps in this - the 100%ers would say yes: every method must be tested and you should have 100% code coverage. The other extreme is that unit tests should only cover areas where you have encountered bugs or expect to find bugs. The middle ground (and the stand I take) is to unit test everything that is not "too simple to break": setters/getters and anything that just calls a single other method. I aim to have 80% code coverage and a low CRAP factor (so a low chance I've been naughty and decided not to test something because it was "too complex to test").
The book that helped me "get" unit tests is JUnit in Action. Sorry, I don't do much in the C++ world, so I cannot suggest a C++ based alternative.

What are unit testing and integration testing, and what other types of testing should I know about?

I've seen other people mention several types of testing on Stack Overflow.
The ones I can recall are unit testing and integration testing. Unit testing especially is mentioned a lot. What exactly is unit testing? What is integration testing? What other important testing techniques should I be aware of?
Programming is not my profession, but I would like it to be some day; stuff about production etc. is welcome too.
Off the top of my head:
Unit testing in the sense of "testing the smallest isolatable unit of an application"; this is typically a method or a class, depending on scale.
Integration testing
Feature testing: this may cut across units, and is the focus of TDD.
Black-box testing: testing only the public interface with no knowledge of how the thing works.
Glass-box testing: testing all parts of a thing with full knowledge of how it works.
Regression testing: test-cases constructed to reproduce bugs, to ensure that they do not reappear later.
Pointless testing: testing the same basic case more than one way, or testing things so trivial that they really do not need to be tested (like auto-generated getters and setters)
should I be aware of any other important testing of my code?
These are some of the different kinds of test, according to different phases of the software lifecycle:
Unit test: does this little bit of code work?
Unit test suite: a sequence of many unit tests (for many little bits of code)
Integration test: test whether two components work together when they're combined (or 'integrated')
System test: test whether all components work together when they're combined (or 'integrated')
Acceptance test: what the customer does to decide whether he wants to pay you (system test discovers whether the software works as designed ... acceptance test discovers whether "as-designed" is what the customer wanted)
There's more:
Usability test
Performance test
Load test
Stress test
And, much more ... testing software is nearly as wide a subject as writing software.
MSDN: Unit Testing
The primary goal of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Each unit is tested separately before integrating them into modules to test the interfaces between modules. Unit testing has proven its value in that a large percentage of defects are identified during its use.
MSDN: Integration Testing
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs rather than all at once.
Check sites for more information. There is plenty of information out there as well from sources other than Microsoft.
The other important technique is regression testing. In this technique, you maintain a suite of tests (called the regression suite), which are usually run nightly as well as before every checkin. Every time you have a bug fix you add one or more tests to the suite. The purpose is to stop you from re-introducing old bugs that have already been fixed. (The problem is surprisingly common!)
Start accumulating your regression suite early, before your project gets big, or you'll regret it. I surely have!
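A sketch of what an entry in such a regression suite can look like: a test named after the bug it pins down, added alongside the fix (the bug number and names are hypothetical).

using NUnit.Framework;

// Hypothetical unit under test.
public static class PriceFormatter
{
    public static string Format(decimal amount)
    {
        return amount.ToString("0.00", System.Globalization.CultureInfo.InvariantCulture);
    }
}

[TestFixture]
public class PriceFormatterRegressionTests
{
    // Added together with the fix for hypothetical bug #1234:
    // negative amounts were once rendered without the minus sign.
    [Test]
    public void Bug1234_NegativeAmount_KeepsMinusSign()
    {
        Assert.AreEqual("-5.00", PriceFormatter.Format(-5m));
    }
}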
There are different levels of testing corresponding to what stage you are at in the software development life cycle. The highest level is requirements analysis and the lowest level is implementation of the solution.
What is unit testing?
Unit testing assesses the software with regard to its implementation.
We focus on testing the units of a program (i.e., individual methods) and disregard who calls/uses these units. Hence, we essentially treat each unit as a stand-alone unit.
There are many unit testing tools; one of the most popular is JUnit.
When performing unit testing we want to construct a test set (a set of test cases) that satisfies a certain coverage criterion. This could be a structural coverage criterion (NC, EC, PPC, etc.) or a data flow criterion (ADC, AUC, ADUPC, etc.).
Note that unit testing is the lowest level of testing because it assesses the actual code units produced after implementation. This type of testing is done before integration testing.
Efficient unit testing helps ensure that integration testing will not be painful, since it is cheaper and easier to spot and fix bugs when doing unit testing.
What is integration testing?
Integration testing is needed to ensure that our software still works when two or more components are combined.
You can perform integration testing before the system is complete.
The class integration test order (CITO) problem is associated with integration testing. CITO has to do with the strategy of integrating components. There are many proposed solutions to CITO such as top down integration, bottom up integration etc. The main goal is to integrate components in a way that enables efficient testing and the least amount of stubbing, since writing code stubs is not always easy and takes time. Note that this is still an active area of research!
Integration testing is done after unit testing.
It is often the case that the individual components work fine but when everything is put together, we suddenly see bugs appearing due to incompatibilities/issues with interfaces.
Other levels of testing include:
Regression Testing
This type of testing is given a lot of importance because developers commit changes to software quite often typically, so we want to make sure that these changes do not introduce bugs.
The key for effective regression testing is to limit the size of the regression tests so that it does not take too long to finish testing otherwise the test set will not finish running before the next commit. We must also pick effective test cases that will not miss bugs.
This type of testing should be automated.
It is important to note that we can always keep on adding more machines in order to counteract the increasing amount of regression tests but at some point the tradeoff is not worth it.
Acceptance Testing
With this testing we assess software in relation to the provided requirements, basically we see if the software we have produced meets the requirements we were given.
This is usually the last type of testing done in the sequence of software development activities. In consequence this type of testing is done after unit testing and integration testing.
Unit testing is simply the idea of writing (hopefully) small blocks of code to test independent parts of your application.
For example, you might have a calculator application and you need to make sure the addition function works. To do this you write a separate application that calls the addition function directly. Then your test function evaluates the result to see if it matches what you expected.
It's basically calling your functions with known inputs and verifying the output is exactly what you expected.
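A minimal sketch of exactly that, in C# with NUnit (the Calculator class stands in for the addition function being tested):

using NUnit.Framework;

// Hypothetical unit under test.
public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfKnownInputs()
    {
        // Known inputs, expected outputs, exact verification.
        Assert.AreEqual(5, Calculator.Add(2, 3));
        Assert.AreEqual(0, Calculator.Add(-2, 2));
    }
}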
The first two search results on Google for 'types of testing' look comprehensive:
www.aptest.com/testtypes.html
www.softwaretestinghelp.com/types-of-software-testing/
The ones I think are most relevant. See here.
This was an entry I wrote: Different Types of Automated Tests.
Unit Testing: The testing done to a unit, i.e. the smallest testable piece of software. Done to verify whether it satisfies its functional specification or its intended design structure.
Integration Testing: Testing the related modules together for its combined functionality.
Regression Testing: Testing the application to verify that modifications have not caused unintended effects.
Smoke Testing: Smoke testing verifies whether the build is testable or not.

What does unit testing mean to you?

G'day,
I am working with a group of offshore developers who have been using the term unit testing quite loosely.
Their QA document talks about writing unit tests and then performing unit testing of the system.
This doesn't line up with my interpretation of what unit testing is at all.
I am used to unit testing being a test or suite of tests that are being used to exercise a single class, usually as a black box. The class under test may require other classes to be included by the implementation but generally it is a single class that is being exercised by the unit test(s).
Then you have system functional testing, integration testing, acceptance testing, etc.
What I want to know is: is this a bit pedantic on my part? Or is this what you think of when referring to unit tests and unit testing?
Edit: Rob Wells. I need to clarify that approaching such testing from a black box perspective is only one aspect. When using mock objects to verify internal behaviours, you are really testing from a white box perspective because you know what you want to happen inside the box.
Unit tests are generally used by developers to test isolated sections of code. They cover border cases, error cases, and normal cases. They are intended to demonstrate the correctness of a limited segment of code. If all of your unit tests pass, then you have demonstrated that your isolated segments of code do what they are supposed to do.
When you do integration testing, you are looking at end-to-end cases, to see if all the segments that have passed unit testing work together. Functional testing checks to see if the code meets the requirements as specified. Acceptance testing is done by end users to see if they approve of the final product.
I try to implement unit tests to test only a single method, and I make an effort to create "mock" classes for the dependent classes and methods used by the method I am testing...
... so that the exercise of the code in that method does not in fact call code in other methods the unit test is not supposed to be testing (there are other unit tests for those methods). This way, a failure of the unit test reliably indicates a failure of the method the unit test is testing...
Mock classes are designed to "simulate" the interface and behavior of dependent classes so that the method I am testing can call them and they will behave in a standard, well-defined way according to system requirements. In order to make this approach work, calls to such dependent classes and to their methods must be made through a well-defined interface, so that the "tester" process can "inject" the mock version of the dependent class into the class being tested instead of the actual production version... This is kinda like a common design pattern referred to as "Dependency Injection", or "Inversion of Control" (IoC).
There are several third party tools on the market to help you implement this kind of pattern. One I have heard of is called "Rhino Mocks" or something like that...
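Here is a minimal hand-rolled version of the pattern described above (all names are hypothetical); tools like Rhino Mocks generate this kind of stand-in for you:

using NUnit.Framework;

public interface IMailer { void Send(string to, string body); }

public class WelcomeService
{
    private readonly IMailer mailer;

    // The dependency is injected through the constructor,
    // so tests can supply a mock instead of a real mailer.
    public WelcomeService(IMailer mailer) { this.mailer = mailer; }

    public void Welcome(string user) { mailer.Send(user, "Welcome!"); }
}

// Hand-rolled mock: records the call instead of sending real mail.
public class MockMailer : IMailer
{
    public string LastRecipient;
    public void Send(string to, string body) { LastRecipient = to; }
}

[TestFixture]
public class WelcomeServiceTests
{
    [Test]
    public void Welcome_SendsMailToUser()
    {
        var mock = new MockMailer();
        new WelcomeService(mock).Welcome("alice@example.com");
        Assert.AreEqual("alice@example.com", mock.LastRecipient);
    }
}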
Edit: Rob Wells. #Charles. Thanks for this. I'd forgotten using mock objects to completely replace using other classes except for the one under test.
A couple of other things I've remembered after you mentioning mock objects is that:
they can be used to simulate errors being returned by the included classes.
they can be used to raise specific exceptions to check exception handling in the class under test.
they can be used to simulate items where setup costs are high, e.g. a large SQL DB back end.
they can be used to verify the contents of an incoming request.
For more information, have a look at Martin Fowler's paper called "Mocks Aren't Stubs" and The Pragmatic Programmers' article "Mock Objects".
There is no reason why unit tests can't span multiple classes, or even submodules, as long as the test is treating only one consistent business operation. Think about "calculateWage", a method of a BO that uses different strategies to calculate the salary of a person. That's one unit test in my opinion.
I have heard of techniques in which many of the unit tests are done first, and development is done around them. Someone has just commented saying that this is "Test Driven Development" - TDD (Thanks Elie).
But if this is an offshore operation that's possibly going to charge you more money because they're spending time doing these unit tests - then I'd be careful. Get a second opinion from someone, experienced with unit tests who will verify that they're actually doing as they say.
From my understanding, unit testing will add a bit more time to any development project, but of course may offer some quality control. Nonetheless, this is the type of quality control I would want with an in house project. This may just be something the offshore company throws out there to give you a warm fuzzy.
There is a difference between the process you use to test and the technology that is used to support it. The various frameworks used for unit testing are generally very flexible and can be used for testing small units of code, large units and even testing entire processes. This flexibility can lead to confusion.
My recommendation is that whatever specific methodology or process you adopt that you segregate the various Unit Test into distinct assemblies or modules. The exact arrangement depends on your code and your company's organization.
The cumulative effect of using a unit test framework is that much of the testing of the code is automated. Adopted correctly, developers can evaluate changes to the code better without going through a full QA process. As for the QA process itself, it makes their time more productive, as the quality of the code coming out of development should be higher.
Understand it is not THE answer to all quality issues; it is just a useful tool, like the others you use.
Wikipedia would seem to suggest that unit testing is about testing the smallest amount of code which would be a method on a class in the case of OO programming.
Some may have a more general term of what they mean by unit tests, where some may think of some integration tests as being unit tests where the unit is a mixture of components.
There is a traditional view of software testing (http://en.wikipedia.org/wiki/Software_testing) as part of software engineering (http://en.wikipedia.org/wiki/Software_engineering), but I like the idea of an agile unit test: a test is an agile unit test if it is fast enough that the programmers always run it.
A unit test is the smallest and only piece of confidence you can get yourself on your way to being done. That is what matters, iteratively building a shield against regression and spec deviation, not how you actually integrate it to your Object-Oriented architecture.
This is almost a repeat of the "What is a 'Unit'?" question.
"Unit" can be defined flexibly. If their document doesn't have a definition of "unit", you'll need to clarify that.
It might be that they think of unit as a big assembly of code. Which is not the most desirable definition.
While I agree that you have several layers of testing (unit, module, package, application), I also think that much of this can be done with unit testing tools. Leading to "what is a unit?" questions coming up all the time.
Unit depends on context. For an individual developer, the unit must be a class. Sometimes, it will also mean a module or package.
For a team, however, their unit may be a package or a complete application.
What does it matter what we think? The issue here is your unhappiness with the terms they use in the document. Why don't you discuss it with them?
Ten years ago, before the current usage of "unit testing" as tests written in code, the same designation was applied to manual tests. I worked for a software development firm with a very formalized software development process. We had to write "unit tests" before writing any code. In that era, the unit tests were written in a text document (such as in Word). They described the exact steps that the user was to follow in using the app. For example, they described the exact input to type on the page to set up a new customer. Or, the user was to search for a particular product, and see that the displayed information matched the test document. So, the test was a script that the tester followed, where they also recorded the results. When the new incarnation of unit testing came along, it was confusing for a while trying to figure out if they meant the old, human tests or the new, coded tests.
I lead an offshore team too. Supposedly we have a set of unit tests... but it doesn't mean much. :) So we rely much more on functional testing and the testers for quality. The inherent problem with unit testing is that you assume perfect knowledge of the functionality, and you trust the developers. In the real world, that's hard to assume...

What is the difference between integration and unit tests?

I know the so-called textbook definition of unit tests and integration tests. What I am curious about is when it is time to write unit tests... I will write them to cover as many sets of classes as possible.
For example, if I have a Word class, I will write some unit tests for the Word class. Then, I begin writing my Sentence class, and when it needs to interact with the Word class, I will often write my unit tests such that they test both Sentence and Word... at least in the places where they interact.
Have these tests essentially become integration tests because they now test the integration of these 2 classes, or is it just a unit test that spans 2 classes?
In general, because of this uncertain line, I will rarely actually write integration tests... or is my using the finished product to see if all the pieces work properly the actual integration tests, even though they are manual and rarely repeated beyond the scope of each individual feature?
Am I misunderstanding integration tests, or is there really just very little difference between integration and unit tests?
The key difference, to me, is that integration tests reveal if a feature is working or is broken, since they stress the code in a scenario close to reality. They invoke one or more software methods or features and test if they act as expected.
By contrast, a unit test testing a single method relies on the (often wrong) assumption that the rest of the software is correctly working, because it explicitly mocks every dependency.
Hence, when a unit test for a method implementing some feature is green, it does not mean the feature is working.
Say you have a method somewhere like this:
public SomeResults DoSomething(someInput) {
    var someResult = [Do your job with someInput];
    Log.TrackTheFactYouDidYourJob();
    return someResults;
}
DoSomething is very important to your customer: it's a feature, the only thing that matters. That's why you usually write a Cucumber specification asserting it: you wish to verify and communicate the feature is working or not.
Feature: To be able to do something
  In order to do something
  As someone
  I want the system to do this thing

Scenario: A sample one
  Given this situation
  When I do something
  Then what I get is what I was expecting for
No doubt: if the test passes, you can assert you are delivering a working feature. This is what you can call Business Value.
If you want to write a unit test for DoSomething you should pretend (using some mocks) that the rest of the classes and methods are working (that is: that, all dependencies the method is using are correctly working) and assert your method is working.
In practice, you do something like:
public SomeResults DoSomething(someInput) {
    var someResult = [Do your job with someInput];
    FakeAlwaysWorkingLog.TrackTheFactYouDidYourJob(); // Using a mock Log
    return someResults;
}
You can do this with Dependency Injection, or some Factory Method or any Mock Framework or just extending the class under test.
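One way the pseudocode above could be made concrete is constructor injection; this is only a sketch of that option (the answer equally allows a factory method or a mock framework), with the surrounding class and types invented for illustration:

public interface ILog { void TrackTheFactYouDidYourJob(); }

public class FakeAlwaysWorkingLog : ILog
{
    // Deliberately does nothing and never fails.
    public void TrackTheFactYouDidYourJob() { }
}

public class Worker
{
    private readonly ILog log;

    // Real Log in production, FakeAlwaysWorkingLog in unit tests.
    public Worker(ILog log) { this.log = log; }

    public int DoSomething(int someInput)
    {
        int someResult = someInput * 2; // stand-in for [Do your job with someInput]
        log.TrackTheFactYouDidYourJob();
        return someResult;
    }
}

// In a unit test: new Worker(new FakeAlwaysWorkingLog()).DoSomething(21);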
Suppose there's a bug in Log.
Fortunately, the Gherkin spec will find it and your end-to-end tests will fail.
The feature won't work, because Log is broken, not because [Do your job with someInput] is not doing its job. And, by the way, [Do your job with someInput] is the sole responsibility of that method.
Also, suppose Log is used in 100 other features, in 100 other methods of 100 other classes.
Yep, 100 features will fail. But, fortunately, 100 end-to-end tests are failing as well and revealing the problem. And, yes: they are telling the truth.
It's very useful information: I know I have a broken product. It's also very confusing information: it tells me nothing about where the problem is. It communicates the symptom, not the root cause.
Yet, DoSomething's unit test is green, because it's using a fake Log, built to never break. And, yes: it's clearly lying. It's communicating a broken feature is working. How can it be useful?
(If DoSomething()'s unit test fails, be sure: [Do your job with someInput] has some bugs.)
Suppose this is a system with a broken class.
A single bug will break several features, and several integration tests will fail. On the other hand, the same bug will break just one unit test.
Now, compare the two scenarios:
- All your features using the broken Log are red.
- All your unit tests are green; only the unit test for Log is red.
Actually, unit tests for all modules using a broken feature are green because, by using mocks, they removed dependencies. In other words, they run in an ideal, completely fictional world. And this is the only way to isolate bugs and seek them. Unit testing means mocking. If you aren't mocking, you aren't unit testing.
The difference
Integration tests tell what's not working. But they are of no use in guessing where the problem could be.
Unit tests are the sole tests that tell you where exactly the bug is. To draw this information, they must run the method in a mocked environment, where all other dependencies are supposed to correctly work.
That's why I think that your sentence "Or is it just a unit test that spans 2 classes" is somehow displaced. A unit test should never span 2 classes.
This reply is basically a summary of what I wrote here: Unit tests lie, that's why I love them.
When I write unit tests I limit the scope of the code being tested to the class I am currently writing by mocking dependencies. If I am writing a Sentence class, and Sentence has a dependency on Word, I will use a mock Word. By mocking Word I can focus only on its interface and test the various behaviors of my Sentence class as it interacts with Word's interface. This way I am only testing the behavior and implementation of Sentence and not at the same time testing the implementation of Word.
Once I've written the unit tests to ensure Sentence behaves correctly when it interacts with Word based on Word's interface, then I write the integration test to make sure that my assumptions about the interactions were correct. For this I supply the actual objects and write a test that exercises a feature that will end up using both Sentence and Word.
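A minimal sketch of that approach (the IWord/Sentence types are hypothetical): the mock implements Word's interface, so only Sentence's behaviour is under test.

using NUnit.Framework;

// Sentence depends only on the interface, never on the real Word class.
public interface IWord { string Text { get; } }

public class MockWord : IWord
{
    private readonly string text;
    public MockWord(string text) { this.text = text; }
    public string Text { get { return text; } }
}

public class Sentence
{
    private readonly IWord[] words;
    public Sentence(params IWord[] words) { this.words = words; }

    public string Render()
    {
        var parts = new string[words.Length];
        for (int i = 0; i < words.Length; i++) parts[i] = words[i].Text;
        return string.Join(" ", parts);
    }
}

[TestFixture]
public class SentenceTests
{
    [Test] // Unit test: Word's real implementation is never touched.
    public void Render_JoinsWordsWithSpaces()
    {
        var sentence = new Sentence(new MockWord("hello"), new MockWord("world"));
        Assert.AreEqual("hello world", sentence.Render());
    }
}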
My 10 bits :D
I was always told that Unit Tests is the testing of an individual component - which should be exercised to its fullest. Now, this tends to have many levels, since most components are made of smaller parts. For me, a unit is a functional part of the system. So it has to provide something of value (i.e. not a method for string parsing, but a HtmlSanitizer perhaps).
Integration tests are the next step up: taking one or more components and making sure they work together as they should. You are then above the intricacies of worrying about how the components work individually; instead, when you enter HTML into your HtmlEditControl, it somehow magically knows whether it's valid or not.
It's a real movable line though... I'd rather focus more on getting the damn code to work, full stop ^_^
In a unit test you test every part in isolation. In an integration test you test many modules of your system together. And this is what happens when you only use unit tests: generally both windows are working, unfortunately not together.
Unit tests use mocks
The things you're talking about are integration tests that actually test the whole integration of your system. But when you do unit testing you should actually test each unit separately. Everything else should be mocked. So in your case of the Sentence class, if it uses the Word class, then your Word class should be mocked. This way, you'll only test your Sentence class functionality.
I think when you start thinking about integration tests, you are speaking more of a cross between physical layers rather than logical layers.
For example, if your test concerns itself with generating content, it's a unit test; if your test concerns itself with just writing to disk, it's still a unit test; but once you test for both I/O and the content of the file, then you have yourself an integration test. When you test the output of a function within a service, it's a unit test, but once you make a service call and see if the function result is the same, then that's an integration test.
Technically you cannot unit test just-one-class anyway. What if your class is composed with several other classes? Does that automatically make it an integration test? I don't think so.
Using single-responsibility design, it's black and white. More than one responsibility: it's an integration test.
By the duck test (looks, quacks, waddles: it's a duck), it's just a unit test with more than one newed object in it.
When you get into MVC and testing it, controller tests are always integration tests, because the controller contains both a model unit and a view unit. Testing logic in that model, I would call a unit test.
In my opinion the answer is "Why does it matter?"
Is it because unit tests are something you do and integration tests are something you don't? Or vice versa? Of course not, you should try to do both.
Is it because unit tests need to be Fast, Isolated, Repeatable, Self-Validating and Timely and integration tests should not? Of course not, all tests should be these.
Is it because you use mocks in unit tests but you don't use them in integration tests? Of course not. This would imply that if I have a useful integration test, I am not allowed to add a mock for some part, for fear I would have to rename my test to "unit test" or hand it over to another programmer to work on.
Is it because unit tests test one unit and integration tests test a number of units? Of course not. Of what practical importance is that? The theoretical discussion on the scope of tests breaks down in practice anyway because the term "unit" is entirely context dependent. At the class level, a unit might be a method. At an assembly level, a unit might be a class, and at the service level, a unit might be a component.
And even classes use other classes, so which is the unit?
It is of no importance.
Testing is important, F.I.R.S.T is important, splitting hairs about definitions is a waste of time which only confuses newcomers to testing.
The nature of your tests
A unit test of module X is a test that expects (and checks for) problems only in module X.
An integration test of many modules is a test that expects problems that arise from the cooperation between the modules so that these problems would be difficult to find using unit tests alone.
Think of the nature of your tests in the following terms:
Risk reduction: That's what tests are for. Only a combination of unit tests and integration tests can give you full risk reduction, because on the one hand unit tests can inherently not test the proper interaction between modules and on the other hand integration tests can exercise the functionality of a non-trivial module only to a small degree.
Test writing effort: Integration tests can save effort because you may then not need to write stubs/fakes/mocks. But unit tests can save effort, too, when implementing (and maintaining!) those stubs/fakes/mocks happens to be easier than configuring the test setup without them.
Test execution delay: Integration tests involving heavyweight operations (such as access to external systems like DBs or remote servers) tend to be slow(er). This means unit tests can be executed far more frequently, which reduces debugging effort if anything fails, because you have a better idea what you have changed in the meantime. This becomes particularly important if you use test-driven development (TDD).
Debugging effort: If an integration test fails, but none of the unit tests does, this can be very inconvenient, because there is so much code involved that may contain the problem. This is not a big problem if you have previously changed only a few lines -- but as integration tests run slowly, you perhaps did not run them in such short intervals...
Remember that an integration test may still stub/fake/mock away some of its dependencies.
This provides plenty of middle ground between unit tests and system tests (the most comprehensive integration tests, testing all of the system).
Pragmatic approach to using both
So a pragmatic approach would be: Flexibly rely on integration tests as much as you sensibly can and use unit tests where this would be too risky or inconvenient.
This manner of thinking may be more useful than some dogmatic discrimination of unit tests and integration tests.
Unit Testing is a method of testing that verifies the individual units of source code are working properly.
Integration Testing is the phase of software testing in which individual software modules are combined and tested as a group.
Wikipedia defines a unit as the smallest testable part of an application, which in Java/C# is a method. But in your example of Word and Sentence class I would probably just write the tests for sentence since I would likely find it overkill to use a mock word class in order to test the sentence class. So sentence would be my unit and word is an implementation detail of that unit.
I think I would still call a couple of interacting classes a unit test provided that the unit tests for class1 are testing class1's features, and the unit tests for class2 are testing its features, and also that they are not hitting the database.
I call a test an integration test when it runs through most of my stack and even hits the database.
I really like this question, because TDD discussion sometimes feels a bit too purist to me, and it's good for me to see some concrete examples.
I do the same - I call them all unit tests, but at some point I have a "unit test" that covers so much I often rename it to "..IntegrationTest" - just a name change only, nothing else changes.
I think there is a continuum from "atomic tests" (testing one tiny class, or a method) to unit tests (class level) and integration tests - and then functional tests (which normally cover a lot more stuff from the top down) - there doesn't seem to be a clean cut-off.
If your test sets up data, and perhaps loads a database/file etc, then perhaps its more of an integration test (integration tests I find use less mocks and more real classes, but that doesn't mean you can't mock out some of the system).
Integration tests: Database persistence is tested.
Unit tests: Database access is mocked. Code methods are tested.
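A sketch of that split (the repository interface and names are invented for illustration):

using NUnit.Framework;

// Hypothetical service with its database dependency behind an interface.
public interface IUserRepository { bool Exists(string name); }

public class SignupService
{
    private readonly IUserRepository repo;
    public SignupService(IUserRepository repo) { this.repo = repo; }
    public bool CanRegister(string name) { return !repo.Exists(name); }
}

// Unit test: database access is mocked; only the method's logic is tested.
public class StubUserRepository : IUserRepository
{
    public bool Exists(string name) { return name == "taken"; }
}

[TestFixture]
public class SignupServiceUnitTests
{
    [Test]
    public void CanRegister_IsFalse_WhenNameIsTaken()
    {
        var service = new SignupService(new StubUserRepository());
        Assert.IsFalse(service.CanRegister("taken"));
        Assert.IsTrue(service.CanRegister("free"));
    }
}

// An integration test would instead construct SignupService with a repository
// backed by a real test database and verify actual persistence.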
Unit testing is testing against a unit of work or a block of code if you like. Usually performed by a single developer.
Integration testing refers to the test that is performed, preferably on an integration server, when a developer commits their code to a source control repository. Integration testing might be performed by utilities such as Cruise Control.
So you do your unit testing to validate that the unit of work you have built is working and then the integration test validates that whatever you have added to the repository didn't break something else.
Simple Explanation with Analogies
This answer will focus purely on examples.
Integration Tests
Integration tests check if everything is working together.
Unit Tests
They tell you whether one specific thing is working.
Examples
Consider a car:
Integration test for a car: e.g. does the car drive to Pondicherry and back? If so, the car as a whole is working. If it fails, you won't really know where it failed: radiator, transmission, engine, or carburettor?
Unit test for a car: Is the engine working? This tests just the engine; nothing else. If this test fails, then you can be confident that there is a bug in the engine... This ties in closely with the concept of "fakes". You might need keys in order to start the engine - except you don't want the hassle of actually using an ignition (with a lock)... instead, you would hotwire the car to start it... in other words, you would use a "fake" key.
Similarly, in unit testing, you would use "fakes" in order to make the engine work a particular way. And then you could simply test: "is it running?"
I call unit tests those tests that white box test a class. Any dependencies that class requires is replaced with fake ones (mocks).
Integration tests are those tests where multiple classes and their interactions are tested at the same time. Only some dependencies in these cases are faked/mocked.
I wouldn't call controller tests integration tests unless one of their dependencies is a real one (i.e. not faked), e.g. IFormsAuthentication.
Separating the two types of tests is useful for testing the system at different levels. Also, integration tests tend to be long-running, and unit tests are supposed to be quick. The execution-speed distinction means they're executed differently. In our dev processes, unit tests are run at check-in (which is fine cos they're super quick), and integration tests are run once/twice per day. I try and run integration tests as often as possible, but usually hitting the database/writing to files/making RPCs/etc. slows things down.
That raises another important point: unit tests should avoid hitting IO (e.g. disk, network, db). Otherwise they slow down a lot. It takes a bit of effort to design these IO dependencies out - I can't claim I've been faithful to the "unit tests must be fast" rule, but if you are, the benefits on a much larger system become apparent very quickly.
A little bit academic this question, isn't it? ;-)
My point of view:
For me an integration test is a test of the whole, not of whether two parts out of ten work together.
Our integration test shows whether the master build (containing 40 projects) will succeed.
For the projects we have tons of unit tests.
The most important thing concerning unit tests, for me, is that one unit test must not be dependent on another unit test. So for me both tests you describe above are unit tests, if they are independent. For integration tests this need not be important.
Have these tests essentially become integration tests because they now test the integration of these 2 classes? Or is it just a unit test that spans 2 classes?
I think Yes and Yes. Your unit test that spans 2 classes became an integration test.
You could avoid it by testing Sentence class with mock implementation - MockWord class, which is important when those parts of system are large enough to be implemented by different developers. In that case Word is unit tested alone, Sentence is unit tested with help of MockWord, and then Sentence is integration-tested with Word.
An example of a real difference can be the following:
1) An array of 1,000,000 elements is easily unit tested and works fine.
2) BubbleSort is easily unit tested on a mock array of 10 elements and also works fine.
3) Integration testing shows that something is not so fine.
If these parts are developed by a single person, most likely the problem will be found while unit testing BubbleSort, just because the developer already has the real array and does not need a mock implementation.
In addition, it's important to remember that both unit tests and integration tests can be automated and written using, for example, JUnit.
In JUnit integration tests, one can use the org.junit.Assume class to test the availability of environment elements (e.g., database connection) or other conditions.
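NUnit offers a similar facility, Assume.That, which marks a test inconclusive rather than failed when a precondition doesn't hold; a sketch (the connection check is a hypothetical helper):

using NUnit.Framework;

[TestFixture]
public class DatabaseIntegrationTests
{
    // Hypothetical helper; a real version would try to open a connection.
    private static bool DatabaseIsReachable() { return false; }

    [Test]
    public void SavedOrder_CanBeReadBack()
    {
        // Like org.junit.Assume, this marks the test inconclusive
        // (rather than failed) when the environment is unavailable.
        Assume.That(DatabaseIsReachable(), "test database is not reachable");

        // ... exercise real persistence against the database here ...
    }
}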
I get asked this a lot in interviews. Until now I'd ramble on pretentiously about my expertise and pontificate about component and acceptance testing.
For years I'd understood only integration and unit tests. I could, but didn't always bother to, write unit tests as a solo developer honing my skills.
Unit tests
That is a crucial difference. Unit tests are easy to implement and execute, requiring, ideally, no dependencies. That is what mocks are for. It is often easier to not mock everything, particularly where you gain coverage of other functions you wrote. Easier, maybe, but that isn't the idea of unit testing.
I'll reiterate, unit tests are meant to be easy to run and small. Their failure provides immediate insight into where a bug has been introduced.
Here is the hierarchy of tests, from cheap and plentiful at the bottom to slow, expensive, and few at the top (the familiar test pyramid). Several more layers can be conceptualised, but are omitted for clarity.
Integration tests
With integration tests you would consider bringing in serious external dependencies, such as VMs, virtual networks and appliances. Possibly you could use actual modems, routers, and firewalls where the expense was justified.
These wouldn't be run locally but on a build server. A mixture of local Jenkins and cloud based CI providers fulfil this need.
Other test terminology
That is my understanding that has served me for several years in industry. We could talk about component tests, and get a definition, but if the definition isn't in common circulation then it loses value.
Acceptance tests were what we would call business unit or customer requirements. These would lead the direction of everything and sit at the top of the pyramid (picture a dollar sign).
E2E, or end-to-end testing, was used synonymously with integration tests, but I noticed online it is placed above. I guess it could have more relevance to acceptance tests than integration tests, which would tend to be more detailed with less interest from stakeholders (though immense interest internally in the department).
If you're a TDD purist, you write the tests before you write production code. Of course, the tests won't compile, so you first make the tests compile, then make the tests pass.
You can do this with unit tests, but you can't with integration or acceptance tests. If you tried with an integration test, nothing would ever compile until you've finished!