What is the difference between unit tests and functional tests? - unit-testing

What is the difference between unit tests and functional tests? Can a unit test also test a function?

Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
You can read more at Unit Testing versus Functional Testing
A well-explained real-life analogy of unit testing and functional testing goes as follows:
Many times the development of a system is likened to the building of a house. While this analogy isn't quite correct, we can extend it for the purposes of understanding the difference between unit and functional tests.
Unit testing is analogous to a building inspector visiting a house's construction site. He is focused on the various internal systems of the house, the foundation, framing, electrical, plumbing, and so on. He ensures (tests) that the parts of the house will work correctly and safely, that is, meet the building code.
Functional tests in this scenario are analogous to the homeowner visiting this same construction site. He assumes that the internal systems will behave appropriately, that the building inspector is performing his task. The homeowner is focused on what it will be like to live in this house. He is concerned with how the house looks, are the various rooms a comfortable size, does the house fit the family's needs, are the windows in a good spot to catch the morning sun.
The homeowner is performing functional tests on the house. He has the user's perspective.
The building inspector is performing unit tests on the house. He has the builder's perspective.
In summary:
Unit Tests are written from a programmer's perspective. They ensure that a particular method (or unit) of a class performs a set of specific tasks.
Functional Tests are written from the user's perspective. They ensure that the system functions as users expect it to.

Unit Test - testing an individual unit, such as a method (function) in a class, with all dependencies mocked up.
Functional Test - AKA Integration Test, testing a slice of functionality in a system. This will test many methods and may interact with dependencies like Databases or Web Services.
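To make the distinction concrete, here is a minimal sketch of such a unit test with a mocked dependency, using JUnit and Mockito (PriceService and TaxRateClient are hypothetical names, not from the answer above):

import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceServiceTest {
    @Test
    public void grossPriceIncludesTax() {
        // The dependency is mocked, so only PriceService itself is under test.
        TaxRateClient taxRates = mock(TaxRateClient.class);
        when(taxRates.rateFor("DE")).thenReturn(0.19);

        PriceService service = new PriceService(taxRates);
        assertEquals(119.0, service.grossPrice(100.0, "DE"), 0.001);
    }
}

A functional/integration test of the same feature would instead exercise the fully wired system (real tax data, real database) and assert on the end result.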

A unit test tests an independent unit of behavior. What is a unit of behavior? It's the smallest piece of the system that can be independently unit tested. (This definition is actually circular, IOW it's really not a definition at all, but it seems to work quite well in practice, because you can sort-of understand it intuitively.)
A functional test tests an independent piece of functionality.
A unit of behavior is very small: while I absolutely dislike this stupid "one unit test per method" mantra, from a size perspective it is about right. A unit of behavior is something between a part of a method and maybe a couple of methods. At most an object, but not more than one.
A piece of functionality usually comprises many methods and cuts across several objects and often through multiple architectural layers.
A unit test would be something like: when I call the validate_country_code() function and pass it the country code 'ZZ' it should return false.
A functional test would be: when I fill out the shipping form with a country code of ZZ, I should be redirected to a help page which allows me to pick my country code out of a menu.
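A hedged sketch of that unit test in JUnit (CountryCodeValidator is a hypothetical class standing in for wherever validate_country_code() lives, with a Java-style method name):

import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class CountryCodeValidatorTest {
    @Test
    public void rejectsUnknownCountryCode() {
        // 'ZZ' is not a valid country code, so validation should return false.
        assertFalse(new CountryCodeValidator().validateCountryCode("ZZ"));
    }
}

The functional test, by contrast, would drive the shipping form through the browser and assert on the redirect, which is why it cuts across many layers.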
Unit tests are written by developers, for developers, from the developer's perspective.
Functional tests may be user facing, in which case they are written by developers together with users (or maybe with the right tools and right users even by the users themselves), for users, from the user's perspective. Or they may be developer facing (e.g. when they describe some internal piece of functionality that the user doesn't care about), in which case they are written by developers, for developers, but still from the user's perspective.
In the former case, the functional tests may also serve as acceptance tests and as an executable encoding of functional requirements or a functional specification, in the latter case, they may also serve as integration tests.
Unit tests change frequently, functional tests should never change within a major release.

TLDR:
To answer the question: Unit Testing is a subtype of Functional Testing.
There are two big groups: Functional and Non-Functional Testing. The best (non-exhaustive) illustration I found is the one from www.inflectra.com:
(1) Unit Testing: testing of small snippets of code (functions/methods). It may be considered as (white-box) functional testing.
When functions are put together, you create a module = a standalone piece, possibly with a User Interface that can be tested (Module Testing). Once you have at least two separate modules, then you glue them together and then comes:
(2) Integration Testing: when you put two or more pieces of (sub)modules or (sub)systems together and see if they play nicely together.
Then you integrate the 3rd module, then the 4th and 5th in whatever order you or your team see fit, and once all the jigsaw pieces are placed together, comes
(3) System Testing: testing the software as a whole. This is pretty much "Integration testing of all pieces together".
If that's OK, then comes
(4) Acceptance Testing: did we actually build what the customer asked for? Of course, Acceptance Testing should be done throughout the lifecycle, not just at the last stage, where you realise that the customer wanted a sports car and you built a van.

"Functional test" does not mean you are testing a function (method) in your code. It means, generally, that you are testing system functionality -- when I run foo file.txt at the command line, the lines in file.txt become reversed, perhaps. In contrast, a single unit test generally covers a single case of a single method -- length("hello") should return 5, and length("hi") should return 2.
See also IBM's take on the line between unit testing and functional testing.

According to ISTQB those two are not comparable. Functional testing is not integration testing.
Unit testing is a test level, while functional testing is a test type.
Basically:
The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.
while
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of, software (e.g. modules, programs, objects, classes, etc.) that are separately testable.
According to ISTQB, component/unit testing can be functional or non-functional:
Component testing may include testing of functionality and specific non-functional characteristics such as resource-behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing (e.g. decision coverage).
Quotes from Foundations of software testing - ISTQB certification

In Rails, the unit folder is meant to hold tests for your models, the functional folder is meant to hold tests for your controllers, and the integration folder is meant to hold tests that involve any number of controllers interacting. Fixtures are a way of organizing test data; they reside in the fixtures folder. The test_helper.rb file holds the default configuration for your tests.

Very simply, we can say:
Black box: user-interface testing, such as a functional test.
White box: testing the code itself, such as a unit test.

AFAIK, unit testing is NOT functional testing. Let me explain with a small example. You want to test if the login functionality of an email web app is working or not, just as a user would. For that, your functional tests should be like this.
1- existing email, wrong password -> login page should show error "wrong password"!
2- non-existing email, any password -> login page should show error "no such email".
3- existing email, right password -> user should be taken to his inbox page.
4- no # symbol in email, right password -> login page should say "errors in form, please fix them!"
Should our functional tests check whether we can log in with invalid inputs? E.g. the email has no # symbol, the username has more than one dot (only one dot is permitted), .com appears before #, etc.? Generally, no! That kind of testing goes into your unit tests.
You can check if invalid inputs are rejected inside unit tests as shown in the tests below.
class LoginInputsValidator {
    void validateInputValues(String email, String password) {
        // 1. Email must look like string.string#myapp.com (the '#' convention is from this answer).
        if (!email.matches("\\w+\\.\\w+#myapp\\.com")) throw new IllegalArgumentException("bad email format");
        // 2. Email must not contain abusive words (containsAbusiveWords is a helper, not shown).
        if (containsAbusiveWords(email)) throw new IllegalArgumentException("abusive words in email");
        // 3. Password must be at least 10 characters.
        if (password.length() < 10) throw new IllegalArgumentException("password too short");
    }
}
Notice that the functional test 4 is actually doing what unit test 1 is doing. Sometimes, functional tests can repeat some (not all) of the testing done by unit tests, for different reasons. In our example, we use functional test 4 to check if a particular error message appears on entering invalid input. We don't want to test if all bad inputs are rejected or not. That is the job of unit tests.

The way I think of it is like this: a unit test establishes that the code does what you intended it to do (e.g. you wanted to add parameters a and b, and you in fact add them rather than subtracting them); functional tests check that all of the code works together to get a correct result, so that what you intended the code to do in fact produces the right result in the system.

UNIT TESTING
Unit testing means testing the smallest units of code, which are usually functions or methods. Unit testing is mostly done by the developer of the unit/method/function, because they understand its core. The developer's main goal is to cover the code with unit tests.
It has the limitation that some functions cannot be tested through unit tests. Even the successful completion of all unit tests does not guarantee correct operation of the product: the same function may be used in several parts of the system, while the unit test was written for only one usage.
FUNCTIONAL TESTING
It is a type of black-box testing where testing is done on the functional aspects of a product without looking into the code. Functional testing is mostly done by a dedicated software tester. It includes positive, negative and boundary-value-analysis (BVA) techniques, using non-standardized data, to test the specified functionality of the product. Functional tests achieve broader test coverage than unit tests. They use the application GUI for testing, so it's easier to determine what exactly a specific part of the interface is responsible for than to determine what a function in the code is responsible for.

Test types
Unit testing - in procedural programming a unit is a procedure; in object-oriented programming a unit is a class. The unit is isolated, and the test reflects a developer's perspective.
Functional testing - more than Unit. User perspective, which describes a feature, use case, story...
Integration testing - check if all separately developed components work together. It can be other application, service, library, database, network etc.
Narrow integration test - a test double is used. The main purpose is to check that the component is configured in the right way.
Broad integration test (end-to-end test, system test) - a live version is used. The main purpose is to check that all components are configured in the right way.
UI testing - checks that user input triggers the correct action and that the UI changes when actions happen.
...
Non functional testing - other cases
Performance testing - measures speed and other metrics
Usability testing - UX
...

Unit Test:
Unit testing is used to test the product component by component, especially while the product is under development.
Tools like JUnit and NUnit will also help you test the product unit by unit.
Rather than solving issues after integration, it is always easier to resolve them early in development.
Functional Testing:
As far as testing is concerned, there are two main types of testing:
1. Functional Tests
2. Non-Functional Tests.
A non-functional test is one where the tester verifies that the product exhibits those quality attributes the customer doesn't explicitly mention but expects to be there,
like: performance, usability, security, load, stress, etc.
But in a functional test, the customer is already present with his requirements and those are properly documented; the tester's task is to cross-check whether the application's functionality performs according to the proposed system.
For that purpose, the tester should test the implemented functionality against the proposed system.

Unit testing is usually done by developers. The objective is to make sure their code works properly. A general rule of thumb is to cover all paths in the code with unit tests.
Functional Testing: This is a good reference. Functional Testing Explanation

Related

SWTBot vs. Unit Testing

We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers use classes and their methods directly from the implementation (for example, calling methods of the class AddUserDialog). Is this a good approach? And why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP based application? Is it necessary to write unit tests?
Note: We are a Scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code and the above mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense in TDD, that is, you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the state of your production code. Whereas, like in TDD, writing tests beforehand leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather long), i.e. they would not need to be run as PDE JUnit Tests but as plain JUnit Tests instead. Therefore the unit under test should be isolated from the RCP APIs.
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
It is perfectly valid in order to setup an SWTBot test to call methods from your application. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to use SWTBot to laboriously open the wizard itself.
In my experience, SWTBot is even too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you already have the dialog opened programmatically you can as well continue without SWTBot:
dialog.textField.setText( "data" );
dialog.okButton.notifyListeners( SWT.Selection, null );
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both, unit tests that ensure the behavior of the respective units and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.

What do you test with Unit tests?

I am new to unit testing. Suppose I am building a web application. How do I know what to test? All the examples that you see are some sort of basic sum function that has no real value, or at least I've never written a function to add two inputs and then return the result!
So, my question... on a web application, what sort of things need to be tested?
I know that this is a broad question but anything will be helpful. I would be interested in links or anything that gives real life examples as opposed to concept examples that don't have any real life usage.
Take a look at your code, especially the bits where you have complex logic with loops, conditionals, etc, and ask yourself: How do I know if this works?
If you need to change the complex logic to take into account other corner cases then how do you know that the changes you introduce don't break the existing cases? This is precisely what unit testing is intended to address.
So, to answer your question about how it applies to web applications: suppose you have some code that lays out the page differently depending on the browser. One of your customers refuses to upgrade from IE6 and insists that you support that. So you unit test your layout code by simulating the user-agent string from IE6 and checking that the layout is what you expect.
A customer tells you they've found a security hole where using a particular cookie will give you administrator access. How do you know that you've fixed the bug and it doesn't happen again? Create a unit test for it, and run the unit tests on a daily basis so that you get an early warning if it fails.
You discover a bug where users with accents in their names get corrupted in the database. Abstract out the webform input from the database layer and add unit tests to ensure that (eg) UTF8 encoded data is stored in the database correctly and can be retrieved.
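A rough sketch of such a round-trip test, assuming a hypothetical UserRepository abstraction over the database layer and an in-memory test double (all names are illustrative):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UserRepositoryEncodingTest {
    @Test
    public void accentedNamesSurviveARoundTrip() {
        UserRepository repo = new InMemoryUserRepository(); // test double standing in for the real DB
        repo.save(new User(1, "José Gutiérrez"));
        // If the storage layer mishandles UTF-8, the name comes back corrupted.
        assertEquals("José Gutiérrez", repo.findById(1).getName());
    }
}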
You get the idea. Anywhere where part of the process has a well-defined input and output is ideal for unit testing. Anything that doesn't is ideal for refactoring until it is well defined. Take a look at projects such as WebUnit, HTMLUnit, XMLUnit, CSSUnit.
The first part of testing is to write testable applications. Separate out as much functionality as possible from the UI. Refactor into smaller methods. Learn about dependency injection, and try using that to create methods that can take simple, throw-away input that produces known (and therefore testable) results. Look at mocking tools.
Infrastructure and data layer code is easiest to test.
Look at behavior-driven testing as well as test-driven design. For my money, behavior testing is better than pure unit testing; you can follow use-cases, so that tests match against expected usage patterns.
Unit testing means testing any unit of work; the smallest units of work are methods and functions. The art of unit testing is to define tests for a function that cannot just be checked by inspection. What unit testing aims at is to test every possible functional requirement of a method.
Consider, for example, a login function. Then there could be the following tests that you could write for failures:
1. Does the function fail on empty username and password
2. Does the function fail on the correct username but the wrong password
3. Does the function fail on the correct password but the wrong username
Then you would also write tests that the function should pass:
1. Does the function pass on correct username and password
This is just a basic example, but it is what unit testing attempts to achieve: testing out things that may have been overlooked during development.
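As a sketch, those four cases might look like this in JUnit (LoginService and the credentials are hypothetical, made up for illustration):

import static org.junit.Assert.*;
import org.junit.Test;

public class LoginServiceTest {
    private final LoginService login = new LoginService(); // hypothetical unit under test

    @Test public void failsOnEmptyUsernameAndPassword() { assertFalse(login.authenticate("", "")); }
    @Test public void failsOnCorrectUsernameWrongPassword() { assertFalse(login.authenticate("alice", "wrong")); }
    @Test public void failsOnWrongUsernameCorrectPassword() { assertFalse(login.authenticate("bob", "secret123")); }
    @Test public void passesOnCorrectUsernameAndPassword() { assertTrue(login.authenticate("alice", "secret123")); }
}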
Then there is a purist approach too where a developer is first supposed to write tests and then the code to pass those tests (aka test driven development).
Resources:
http://devzone.zend.com/article/2772
http://www.ibm.com/developerworks/library/j-mocktest.html
If you're new to TDD, may I suggest a quick trip into the world of BDD? My experience is that the language really helps people pick up TDD more quickly. Particularly, I point you at this article, in which Dan North suggests "what to test":
http://blog.dannorth.net/introducing-bdd/
Note for transparency: I may be heavily involved in the BDD movement.
Regarding the classes to unit test in a web-app, I'd consider starting with controllers, domain objects if they have complex behaviour, and anything called "service", "manager", "helper" or "util". Please also try renaming any classes like this so that they are less generic and actually say what they do. Classes called "calculator" or "converter" are also good candidates, and you'll probably find more in the same package / folder.
There are a couple of good books which could help you too:
Martin Fowler, "Refactoring"
Michael Feathers, "Working Effectively with Legacy Code"
Good luck!
If you start out saying, "How do I test my web app?" that is biting off a lot at once, and it's going to be hard to see unit testing as providing any kind of benefit. I got into unit testing by starting with small pieces that were isolated, then writing libraries test-first, and only then building whole applications that were testable.
Generally a web app has a domain model, it has data access objects that do queries on a database and return domain objects, it has services that call the data access objects, and it has controllers that accept http requests and call the services.
Tests for the controllers will check that they call the right service method with the right parameters. Service objects can be mocks injected during test setup.
Tests for the services will check that they call the right data access objects and perform whatever logic they need to be performing. Data access objects can be mocks injected during test setup.
Tests for the data access objects will check that they perform the right database operation (query or update or whatever) by checking the contents of the database before and after. For dao tests you'll need a database, and a tool like DBUnit to pre-populate it before the test. Also your domain objects' getters and setters will get exercised with this test so you won't need a separate test for them.
Tests for the domain model will check that whatever domain logic you have encoded in them works (Sometimes you may not have any). If you design your domain model so it is not coupled to the database then the more logic you put in the domain model the better because it's easy to test. You shouldn't need any mocks for these tests.
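For example, the controller test described above might look like this with JUnit and Mockito (OrderController and OrderService are hypothetical names standing in for your own classes):

import static org.mockito.Mockito.*;
import org.junit.Test;

public class OrderControllerTest {
    @Test
    public void showOrderDelegatesToService() {
        // The service layer is mocked, so only the controller's wiring is under test.
        OrderService service = mock(OrderService.class);
        OrderController controller = new OrderController(service);

        controller.showOrder(42L);

        // Check that the controller called the right service method with the right parameter.
        verify(service).findOrder(42L);
    }
}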
For a web app the kind of tests you need to do are slightly different. Unit tests are tests which test a particular component of your program. For a web app, you would need to test that forms accept/reject the right inputs, that all links point to the right place, that it can cope with unexpected inputs etc. I'd have a look at Selenium if I were you, I've used it extensively in testing a number of sites: Selenium HQ
I don't have experience of testing web apps, but speaking generally: you unit test the smallest 'chunks' of your program possible. That means you test each function on an individual basis. Anything on a larger scale becomes an integration test.
Of course, there are going to be methods so simple that it's not worth your time to write a test for them, but on the whole aim to test as great a proportion of your code as possible.
A rule of thumb is that if it is not worth testing it is not worth writing.
However, some things are very difficult to test, so you have to do some cost-benefit analysis on what you test. If you initially aim for 70% code coverage, you will be on the right track.

Integration testing - can it be done right?

I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern.
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and evaluate the actual output against the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the sw development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still don't have much of a clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test-logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used for both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and no dependencies), while your integration tests will be more brittle and break often, so you will probably have a different policy for running and maintaining those tests.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like the following (a rough Java sketch):
import java.nio.file.*;
import java.util.List;

class OutFileValidator {
    // Open the file, read its lines, and run the validation logic
    // against the expected data.
    boolean isCorrect(String fileName, List<String> expectedData) throws java.io.IOException {
        return Files.readAllLines(Paths.get(fileName)).equals(expectedData);
    }
}
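The integration test itself could then drive the real, fully wired stack and use the validator only for verification. A rough sketch, assuming Java 11's HttpClient, with a made-up URL, file path and records:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ExportIntegrationTest {
    @Test
    public void exportEndpointWritesTheExpectedFile() throws Exception {
        // Drive the application through its real entry point (hypothetical URL).
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/export?ids=1,2")).build();
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.discarding());

        // Then verify the observable outcome on the filesystem.
        assertTrue(new OutFileValidator().isCorrect("/tmp/outfile.txt", List.of("record 1", "record 2")));
    }
}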
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow, with WatiR / WatiN. For each feature it has one or more scenarios, and you work on one scenario (behaviour) at a time, and when it passes, you move onto the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back end implementation, however they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
You can of course use integration testing, as others pointed out.

Unit testing, approval testing and data files

(Leaving aside hair-splitting about whether this is integration testing or unit testing.)
I would rather first test at the large scale. If my app writes a VRML file that is the same as the reference one, then the VRML exporter works; I don't then have to run unit tests on every single statement in the code. I would also like to use this to do some level of poor-man's GUI testing by comparing screenshots.
Is there a unit test framework (for C++ ideally) that integrates this sort of testing - or at least makes it easy to integrate with unit tests?
Edit: it seems a better term is approval testing. So are there any other unit test frameworks that incorporate approval testing?
Have a look at Approval Tests, written by a couple of friends of mine. Not C++, but it's the general idea of what you're after, also known as Golden Master tests. Good stuff, whether it's unit tests or not.
Kitware, for VTK, uses CDash to do most of its testing. Many of its tests are similar in nature to this - they write out an image of the rendered model, and compare it to a reference image.
In addition, they have code in there to specifically handle very subtle differences to the reference image due to different graphics card drivers/manufacturers/etc. The tests can be written in a way to compare the reference image with some tolerance.
Okay, I think you're making an incorrect assumption about the nature of unit test code; your statement that
"If my app writes a VRML file that is the same as the reference one then the VRML exporter works, I don't then have to run unit tests on every single statement in the code."
is strictly correct if you're looking to do a validation test on your code, but note that this type of test is strictly different than what a unit test actually is. Unit tests are for testing individual units of code; they do not exist for verification purposes. Depending on your environment, you may not need unit tests at all, but please keep in mind that validation tests (testing the validity of the overall program output) and unit tests (testing that the individual code units work as expected) are completely different things.
(Note that I'm really not trying to be nitpicky about this; also, you can use plenty of unit test frameworks to achieve this result; keep in mind, though, that what you're writing aren't really "Unit Tests", despite running them in a Unit Test framework.)

What are the differences between unit tests, integration tests, smoke tests, and regression tests? [closed]

What are unit tests, integration tests, smoke tests, and regression tests? What are the differences between them and which tools can I use for each of them?
For example, I use JUnit and NUnit for unit testing and integration testing. Are there any tools for the last two, smoke testing or regression testing?
Unit test: Specify and test one point of the contract of single method of a class. This should have a very narrow and well defined scope. Complex dependencies and interactions to the outside world are stubbed or mocked.
Integration test: Test the correct inter-operation of multiple subsystems. There is a whole spectrum there, from testing the integration between two classes to testing the integration with the production environment.
Smoke test (aka sanity check): A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up.
Smoke testing is both an analogy with electronics, where the first test occurs when powering up a circuit (if it smokes, it's bad!)...
... and, apparently, with plumbing, where a system of pipes is literally filled by smoke and then checked visually. If anything smokes, the system is leaky.
Regression test: A test that was written when a bug was fixed. It ensures that this specific bug will not occur again. The full name is "non-regression test". It can also be a test made prior to changing an application to make sure the application provides the same outcome.
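A minimal sketch of such a regression test, pinned to a hypothetical, already-fixed bug (UsernameNormalizer is a made-up name):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UsernameTrimRegressionTest {
    // Regression test for a hypothetical fixed bug: usernames with trailing
    // whitespace were stored verbatim, and those users could no longer log in.
    @Test
    public void trailingWhitespaceIsStrippedFromUsernames() {
        assertEquals("alice", new UsernameNormalizer().normalize("alice  "));
    }
}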
To this, I will add:
Acceptance test: Test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case to provide rather than on the components involved.
System test: Tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test (otherwise it would be more of an integration test).
Pre-flight check: Tests that are repeated in a production-like environment, to alleviate the 'builds on my machine' syndrome. Often this is realized by doing an acceptance or smoke test in a production like environment.
Unit test: an automatic test to test the internal workings of a class. It should be a stand-alone test which is not related to other resources.
Integration test: an automatic test that is done on an environment, so similar to unit tests but with external resources (db, disk access)
Regression test: after implementing new features or bug fixes, you re-test scenarios which worked in the past. Here you cover the possibility in which your new features break existing features.
Smoke testing: the first tests, from which testers can conclude whether they will continue testing.
Everyone will have slightly different definitions, and there are often grey areas. However:
Unit test: does this one little bit (as isolated as possible) work?
Integration test: do these two (or more) components work together?
Smoke test: does this whole system (as close to being a production system as possible) hang together reasonably well? (i.e. are we reasonably confident it won't create a black hole?)
Regression test: have we inadvertently re-introduced any bugs we'd previously fixed?
A new test category I've just become aware of is the canary test. A canary test is an automated, non-destructive test that is run on a regular basis in a live environment, such that if it ever fails, something really bad has happened.
Examples might be:
Has data that should only ever be available in development/test appeared live?
Has a background process failed to run?
Can a user logon?
Answer from one of the best websites for software testing techniques: Types of software testing – complete list.
It's quite a long description, and I'm not going to paste it here: but it may be helpful for someone who wants to know all the testing techniques.
Unit test: Verifying that particular component (i.e., class) created or modified functions as designed. This test can be manual or automated, but it does not move beyond the boundary of the component.
Integration test: Verifying that the interaction of particular components function as designed. Integration tests can be performed at the unit level or the system level. These tests can be manual or automated.
Regression test: Verifying that new defects are not introduced into existing code. These tests can be manual or automated.
Depending upon your SDLC (waterfall, RUP, agile, etc.) particular tests may be performed in 'phases' or may all be performed, more or less, at the same time. For example, unit testing may be limited to developers who then turn the code over to testers for integration and regression testing. However, another approach might have developers doing unit testing and some level of integration and regression testing (using a TDD approach along with continuous integration and automated unit and regression tests).
The tool set will depend largely on the codebase, but there are many open source tools for unit testing (JUnit). HP's (Mercury) QTP or Borland's Silk Test are both tools for automated integration and regression testing.
Unit test: testing an individual module or independent component in an application is known as unit testing. Unit testing is done by the developer.
Integration test: combining all the modules and testing the application to verify that the communication and the data flow between the modules work properly. This testing is also performed by developers.
Smoke test: in a smoke test, the application is checked in a shallow and wide manner, covering its main functionality. If there is any blocking issue in the application, it is reported to the development team, which fixes the defect and gives the build back to the testing team. The testing team then checks all the modules to verify whether changes made in one module impact any other module. In smoke testing, the test cases are scripted.
Regression testing: executing the same test cases repeatedly to ensure that the unchanged modules do not cause any defect. Regression testing comes under functional testing.
REGRESSION TESTING-
"A regression test re-runs previous tests against the changed software to ensure that the changes made in the current software do not affect the functionality of the existing software."
I just wanted to add some more context on why we have these levels of testing and what they really mean, with examples.
Mike Cohn in his book “Succeeding with Agile” came up with the “Testing Pyramid” as a way to approach automated tests in projects. There are various interpretations of this model. The model explains what kind of automated tests need to be created, how fast they can give feedback on the application under test and who writes these tests.
There are basically 3 levels of automated testing needed for any project and they are as follows.
Unit Tests-
These test the smallest components of your software application. This could literally be one function in the code which computes a value based on some inputs. This function is one of several functions in the hardware/software codebase that makes up the application.
For example - Let's take a web-based calculator application. The smallest components of this application that need to be unit tested could be a function that performs addition, another that performs subtraction, and so on. All these small functions put together make up the calculator application.
Historically, developers write these tests, as they are usually written in the same programming language as the software application. Unit testing frameworks such as JUnit (for Java), NUnit and MSTest (for C# and .NET) and Jasmine/Mocha (for JavaScript) are used for this purpose.
The biggest advantage of unit tests is that they run really fast, underneath the UI, and we can get quick feedback about the application. They should comprise more than 50% of your automated tests.
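For the calculator example, a unit test at this level is a single fast assertion; a sketch, with Calculator as a hypothetical class:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {
    @Test
    public void addsTwoNumbers() {
        // Exercises one small function in isolation: no UI, no server, so it runs in milliseconds.
        assertEquals(5, new Calculator().add(2, 3));
    }
}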
API/Integration Tests-
These test various components of the software system together. The components could include databases, APIs (Application Programming Interfaces), and 3rd-party tools and services, along with the application.
For example - In our calculator example above, the web application may use a database to store values, use APIs to do some server-side validations, and use a 3rd-party tool/service to publish results to the cloud to make them available across different platforms.
Historically a developer or technical QA would write these tests using various tools such as Postman, SoapUI, JMeter and other tools like Testim.
These run much faster than UI tests, as they still run underneath the hood, but they may consume a little more time than unit tests, as they have to check the communication between various independent components of the system and ensure the components integrate seamlessly. These should comprise more than 30% of the automated tests.
UI Tests-
Finally, we have tests that validate the UI of the application. These tests are usually written to test end-to-end flows through the application.
For example - In the calculator application, an end-to-end flow could be: opening up the browser -> entering the calculator application URL -> logging in with username/password -> opening up the calculator application -> performing some operations on the calculator -> verifying those results from the UI -> logging out of the application. This could be one end-to-end flow that would be a good candidate for UI automation.
Historically, technical QAs or manual testers write UI tests. They use open-source frameworks like Selenium or UI testing platforms like Testim to author, execute and maintain the tests. These tests give more visual feedback, as you can see how the tests run and the differences between expected and actual results through screenshots, logs and test reports.
The biggest limitation of UI tests is that they are relatively slow compared to unit- and API-level tests. So they should comprise only 10-20% of the overall automated tests.
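A rough Selenium sketch of part of that flow (the URL, element locators and page structure are all assumptions):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.Assert.assertEquals;

public class CalculatorUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://calculator.example.com/login"); // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("demo-password");
            driver.findElement(By.id("login")).click();
            // Perform 2 + 3 through the UI and verify the result shown on screen.
            driver.findElement(By.id("two")).click();
            driver.findElement(By.id("plus")).click();
            driver.findElement(By.id("three")).click();
            driver.findElement(By.id("equals")).click();
            assertEquals("5", driver.findElement(By.id("result")).getText());
        } finally {
            driver.quit();
        }
    }
}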
The next two types of tests can vary based on your project but the idea is-
Smoke Tests
This can be a combination of the above 3 levels of testing. The idea is to run it during every code check-in and ensure the critical functionalities of the system are still working as expected after the new code changes are merged. They typically need to run within 5-10 minutes, to give fast feedback on failures.
Regression Tests
They are usually run at least once a day and cover various functionalities of the system. They ensure the application is still working as expected. They are more detailed than the smoke tests and cover more scenarios of the application, including the non-critical ones.
Integration testing: integration testing is performed when one element is integrated with another.
Smoke testing: Smoke testing is also known as build version testing. Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Regression testing: regression testing is repeated testing, checking whether new software affects other modules or not.
Unit testing: it is white-box testing. Only developers are involved in it.
Unit testing is directed at the smallest part of the implementation possible. In Java this means you are testing a single class. If the class depends on other classes these are faked.
When your test calls more than one class, it's an integration test.
Full test suites can take a long time to run, so after a change many teams run some quick-to-complete tests to detect significant breakages. For example, you may have broken the URIs to essential resources. These are the smoke tests.
Regression tests run on every build and allow you to refactor effectively by catching what you break. Any kind of test can be a regression test, but I find unit tests are most helpful in finding the source of a fault.
Unit Testing
Unit testing is usually done on the developers' side, whereas testers are only partly involved in this type of testing, where testing is done unit by unit.
In Java, JUnit test cases can also be written to test whether the code is perfectly designed or not.
Integration Testing:
This type of testing is possible after the unit testing, when all/some components are integrated. This type of testing makes sure that, when components are integrated, they do not affect each other's working capabilities or functionality.
Smoke Testing
This type of testing is done last, when the system has been integrated successfully and is ready to go to the production server.
This type of testing makes sure that every important functionality from start to end is working fine and that the system is ready to deploy to the production server.
Regression Testing
This type of testing is important for verifying that unintended/unwanted defects are not introduced into the system when the developer fixes some issues.
This testing also makes sure that all the bugs are successfully solved and that no other issues have appeared because of the fixes.
Smoke and sanity testing are both performed after a software build to identify whether to start testing. Sanity may or may not be executed after smoke testing. They can be executed separately or at the same time - sanity being immediately after smoke.
Because sanity testing is more in-depth and takes more time, in most cases it is well worth automating.
Smoke testing usually takes no longer than 5-30 minutes to execute. It is more general: it checks a small number of core functionalities of the whole system, in order to verify that the stability of the software is good enough for further testing and that there are no issues blocking the run of the planned test cases.
Sanity testing is more detailed than smoke testing and may take from 15 minutes up to a whole day, depending on the scale of the new build. It is a more specialized type of acceptance testing, performed after progression or re-testing. It checks the core features of certain new functionalities and/or bug fixes, together with some closely related features, in order to verify that they function according to the required operational logic, before regression testing can be executed at a larger scale.
Unit Testing: It is always performed by the developer after development is done, to find issues from their side of testing before making a requirement ready for QA.
Integration Testing: The tester has to verify module-to-submodule interaction, where the data or output of one module is driven into another module; or, in your system, where a third-party tool is used which takes your system's data for integration.
Smoke Testing: The tester performs this to verify the system at a high level and to find show-stopper bugs before the changes or code go live.
Regression Testing: The tester performs regression to verify existing functionality after changes or enhancements are implemented in the system.
Regression test - a type of software testing where we try to cover or check around a bug fix. The functionality around the bug fix should not get changed or altered due to the fix provided. Issues found in this process are called regression issues.
Smoke testing: a kind of testing done to decide whether to accept the build/software for further QA testing.
There are some good answers already, but I would like to refine them further:
Unit testing is the only form of white box testing here. The others are black box testing. White box testing means that you know the input; you know the inner workings of the mechanism and can inspect it and you know the output. With black box testing you only know what the input is and what the output should be.
So clearly unit testing is the only white box testing here.
Unit testing tests specific pieces of code, usually methods.
Integration testing tests whether your new feature or piece of software can integrate with everything else.
Regression testing. This is testing done to make sure you haven't broken anything. Everything that used to work should still work.
Smoke testing is done as a quick test, to make sure everything looks okay before you get involved in the more rigorous testing.
Smoke tests have been explained here already and are simple. Regression tests come under integration tests.
Automated tests can be divided into just two kinds:
Unit tests and integration tests (this is all that matters)
I would use the phrase "long test" (LT) for all tests like integration tests, functional tests, regression tests, UI tests, etc., and call unit tests "short tests".
An LT example could be: automatically loading a web page, logging in to the account and buying a book. If the test passes, it is more likely to run on the live site the same way (hence the 'better sleep' reference). Long = the distance between the web page (start) and the database (end).
And this is a great article discussing the benefits of integration testing (long test) over unit testing.