If I have UI automation tests, why do I need to write unit tests?
If I need to check that a method returns some output for a given input (for example, the result of an addition that is then displayed in a view), why do I need a unit test if I can confirm through a UI automation test that the output shown in the view is correct (or not)?
Unit tests and end-to-end tests (UI tests) have two different purposes.
Unit tests tell you when a unit of code (module, class, function, interface) has an issue.
End-to-end tests tell you how that failure affects the end-to-end output.
Let's use an analogy to understand why we need both.
Suppose you are manufacturing a car by assembling different components like the carburettor, gearbox, tyres, crankshaft, etc. All these parts are made by different vendors (think developers).
When the car fails to work as expected, will you need to test individual components to figure out where the problem originates?
Will testing components before assembling the car save you time and effort?
Typically, you want to make sure each component works as expected (unit tests) before you add it to your car.
When the car does not work as expected, you test each component to find the root cause of the problem.
This typically works by creating an assembly line (CI pipeline). Your testing strategy looks like this:
test individual components
test if they work when interfaced with other components
test the car once all components are assembled together.
This testing strategy is what we call a testing pyramid in programming.
Reading this might give you more insight : https://martinfowler.com/bliki/TestPyramid.html
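To make the unit level concrete, here is a minimal JUnit 5 sketch of the question's own addition example; the Calculator class is hypothetical and simply stands in for whichever component later feeds the view:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical component under test: the addition whose result later ends up in a view.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    class CalculatorTest {
        @Test
        void addReturnsTheSumOfItsInputs() {
            // Fails at the component level, long before any UI is rendered.
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

If this test fails, you know the faulty "component" immediately; a UI test would only tell you that the number shown on screen is wrong.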
Two reasons immediately come to mind as to why you would want unit tests despite having automation tests.
Unit tests make ruthless code refactoring a much less daunting challenge and mitigate the risk of breaking behaviour while you do it.
Unit tests provide invaluable documentation of what each module does (automation tests don't give you this). When the code changes, the unit tests change with it, unlike stale documentation in some wiki or doc that never gets updated as the code continues to change and evolve over time.
In addition to Nishant's and James' answers: with UI/end-to-end tests it is much harder to test for certain error conditions.
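For example (a hedged sketch; the OrderService and PaymentGateway names are invented for illustration), a unit test can force a failure path that would be awkward to reproduce through the UI:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.io.IOException;
    import org.junit.jupiter.api.Test;

    interface PaymentGateway {
        void charge(int cents) throws IOException;
    }

    class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }

        String placeOrder(int cents) {
            try {
                gateway.charge(cents);
                return "CONFIRMED";
            } catch (IOException e) {
                return "PAYMENT_UNAVAILABLE";
            }
        }
    }

    class OrderServiceTest {
        @Test
        void gatewayOutageIsReportedInsteadOfCrashing() {
            // Stub the gateway so it always fails; provoking a real outage from a UI test is much harder.
            OrderService service = new OrderService(cents -> { throw new IOException("gateway down"); });
            assertEquals("PAYMENT_UNAVAILABLE", service.placeOrder(1000));
        }
    }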
First of all, you need to understand that unit test cases and user interface (UI) test automation are two different concepts. In unit test cases, you write test cases per unit and test module by module; you're actually testing each module separately.
Test automation, on the other hand, covers end-to-end testing. It tests your end-to-end inputs and their respective outputs. Both have their own advantages, so you need to use both on your product to make sure it is bug-free. Let's better understand the need for unit tests with the help of an example:
You're building a chatting app. For the app, you're integrating different modules like login, registration, send and receive a message, message history etc. Now, suppose there are multiple developers working on this product: each developer has worked on a different module. In this scenario, you need to join all the modules into the system flow to make the complete product. When you integrate all the modules, you find that the product is not able to store messages. So, now you need to test each module independently because you can't tell which specific module didn't work.
To avoid such cases, it's better to test each module before merging it with the others. This is called unit testing. If unit testing is done correctly, you will get the bug immediately. Once all the unit test cases pass, you can finally start integrating modules.
Unit testing is generally run on an assembly line (CI pipeline). Your product usually works if you create a good testing strategy and write solid test cases. The flow looks a bit like this:
Test individual modules
Start integrating and testing each functionality and see if it's working or not
Run UI automation test cases on the product once you have integrated all the modules
In the end, if all test cases pass, that means your system is ready to work flawlessly.
The advantages of unit testing are obvious to me: it is done by the developers themselves (either test-first or code-first) and is automated.
What I am a bit unsure about is whether developers should also do integration testing when the team already includes a dedicated tester, who automates as much as possible and does black-box testing of the whole system (end-to-end testing, or the more common term, acceptance testing).
Some more details for background:
Example Integration Test (MVC webapp)
Setup: Only the controller itself and the layers below the controller are bootstrapped during test setup. Nothing is mocked or stubbed.
Test entry: the bare controller. Most often a controller's entry points are methods with parameters (e.g. Spring MVC) and can be invoked directly. No browser is involved in the test fixture.
Assert targets: Model data and view name are asserted as direct outputs. Indirect outputs (e.g. data written to the database) could also be asserted. The rendered payload (most often HTML) is ignored completely.
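A minimal sketch of what such a controller-level test could look like (all class, view and field names are invented; in the real setup the layer below the controller would be the bootstrapped production service, not the tiny in-memory stand-in used here to keep the sketch self-contained):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.jupiter.api.Test;

    interface UserService {
        boolean passwordMatches(String email, String password);
    }

    // Hypothetical controller: the entry point is a plain method with parameters, no browser or HTTP involved.
    class LoginController {
        private final UserService users;
        LoginController(UserService users) { this.users = users; }

        String login(String email, String password, Map<String, Object> model) {
            if (users.passwordMatches(email, password)) {
                model.put("user", email);
                return "dashboard";                  // view name as the direct output
            }
            model.put("error", "wrong password");
            return "login";
        }
    }

    class LoginControllerIntegrationTest {
        @Test
        void successfulLoginReturnsDashboardViewWithUserInModel() {
            LoginController controller = new LoginController((email, pw) -> "secret".equals(pw));
            Map<String, Object> model = new HashMap<>();
            // Assert targets: view name and model data; the rendered HTML is ignored completely.
            assertEquals("dashboard", controller.login("me@example.com", "secret", model));
            assertEquals("me@example.com", model.get("user"));
        }
    }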
Example Acceptance Test (MVC webapp)
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
Test entry: the HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
Assert targets: The test output is the complete rendered response (HTML and other artifacts like JavaScript). Assertions on the database (e.g. that data was inserted) can also be included.
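For comparison, a hedged sketch of that acceptance-test style, assuming Selenium WebDriver and a locally running instance of the webapp (the URL and element locators are made up):

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class LoginAcceptanceTest {
        @Test
        void userCanLogInThroughTheBrowser() {
            WebDriver driver = new ChromeDriver();
            try {
                // The whole webapp is running; the test drives it exactly like an end user would.
                driver.get("http://localhost:8080/login");
                driver.findElement(By.name("email")).sendKeys("me@example.com");
                driver.findElement(By.name("password")).sendKeys("secret");
                driver.findElement(By.cssSelector("button[type='submit']")).click();
                // Assert against the fully rendered response.
                assertTrue(driver.getPageSource().contains("Welcome"));
            } finally {
                driver.quit();
            }
        }
    }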
Pitfalls of double testing (both integration + acceptance)
I see major problems when including both test styles:
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an acceptance test would do. In the end, "double testing" could happen, which is highly inefficient.
Controller tests are more white-box and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes a lot of effort. Acceptance tests, where the whole application is started as a black box, are simpler to set up and have the advantage of being closer to production.
The two points above lead me to conclude that if your tester has a good automation strategy, you should skip integration tests done by developers. Developers should focus more on unit tests.
What do you think? Can you explain your test strategy? Do you have good/bad experiences including both test styles?
Thanks for reading my long question ;)
EDIT: Acceptance testing seems to be the more common term than end-to-end, so I switched the terms.
We do Acceptance TDD at my work.
When I first started I was told I could implement whatever policies I wanted so long as the work was completed in a timely and predictable fashion. Having done unit testing in the past, I realized that one of the problems we always ran into was integration bugs. Some could take quite a long time to fix and were often a surprise. We would run into subtle bugs we introduced while extending the app's functionality.
I decided to avoid the issues I had run into in the past by focusing more on the end-result features we were supposed to deliver. We would write tests that tested the acceptance behaviour, not just at the unit level, but at the whole-system level. I wanted to do that because at the end of the day I don't care whether the unit works correctly; I care that the entire system works correctly. We found the following benefits to doing automated acceptance tests:
We NEVER regress end user functionality because it is explicitly tested for.
Refactors are easier because we don't have to update a bunch of unit tests. We just have to make sure our acceptance test still passes.
The integration of the "units" are implicitly covered.
The tests become a very clear definition of required end user functionality.
Integration issues are exposed earlier and are less of a surprise.
Some of the trade-offs of doing it this way:
Tests can be more complex in terms of usage of mocks, stubs, fixtures, etc.
Tests are less useful for narrowing down which "unit" has the defect.
We also make our test suite runnable via a Continuous Integration server which tags and packages for deployment. It runs with every commit as with most CI setups.
With regard to your points/concerns:
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
One compromise we do tend to make is to run the test in the same process space, as with unit tests. Our entry point is the top of the app stack. We don't bother trying to run the app as a server because that adds complexity and doesn't add much in terms of coverage.
Test entry: the HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
All of our automated tests are driven by simulating an HTTP GET, POST, PUT, or DELETE. We don't actually use a browser for this, though; a call into the top of the app stack, the same way the particular HTTP call gets mapped in, works just fine.
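In a Spring MVC application this in-process "simulated HTTP call" idea looks roughly like the MockMvc sketch below; the framework choice is my assumption (the answer does not name its stack), and the annotated controller is a trivial invented one so the example is self-contained:

    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.model;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.view;

    import org.junit.jupiter.api.Test;
    import org.springframework.stereotype.Controller;
    import org.springframework.test.web.servlet.MockMvc;
    import org.springframework.test.web.servlet.setup.MockMvcBuilders;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestParam;

    // Hypothetical Spring-annotated controller, kept trivial so the sketch compiles on its own.
    @Controller
    class LoginController {
        @PostMapping("/login")
        String login(@RequestParam String email, @RequestParam String password, Model model) {
            model.addAttribute("user", email);
            return "dashboard";
        }
    }

    class LoginFlowTest {
        // The controller is wired directly into MockMvc; no server socket or browser is started.
        private final MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new LoginController()).build();

        @Test
        void postToLoginRendersDashboard() throws Exception {
            mockMvc.perform(post("/login").param("email", "me@example.com").param("password", "secret"))
                   .andExpect(status().isOk())
                   .andExpect(view().name("dashboard"))
                   .andExpect(model().attribute("user", "me@example.com"));
        }
    }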
Assert targets: The test output is the complete rendered response (HTML and other artifacts like JavaScript). Assertions on the database (e.g. that data was inserted) can also be included.
I think this is where automated acceptance tests really shine. What you assert is the end user functionality you want to guarantee that you are implementing.
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an end-to-end test would do. In the end, "double testing" could happen, which is highly inefficient.
We actually do very little unit testing and rely almost solely on our automated acceptance tests. As a result we don't have much in the way of double testing.
Controller tests are more white-box and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes a lot of effort. End-to-end tests, where the whole application is started as a black box, are simpler to set up and have the advantage of being closer to production.
They may have more dependencies, but those can be mitigated through the usage of mocks and fixtures. We also usually implement our tests with two modes of execution: an unmanaged mode, where the test runs fully wired to the network, databases, etc., and a managed mode, where the test runs with the unmanaged resources mocked out. Although you are correct in your assertion that the tests can be a lot more effort to create and maintain.
Developers should do integration tests of the parts they changed or implemented. By integration tests, I mean they should see whether the functionality they implemented really works as expected. If you don't do this, how do you know that what you just finished really works? Unit tests by themselves are not the final goal; it is the product that matters.
This should be done in order to speed up bug finding. After all, integration tests take long to execute (at least in my company; because of the complexity it takes 1-2 days to execute all integration tests). Finding bugs earlier is better than later.
Having integration tests (and, indeed, unit tests) that test behaviour that is also tested by a system test helps debugging, by narrowing the location of a defect. If your system has components A-B-C and fails a system test-case, but the assembly A-B passes a similar integration test-case, the defect is probably in component C.
Considering that this post is dealing with testing pitfalls, I would like to make you aware of my most recent book, Common System and Software Testing Pitfalls, which was published last month by Addison Wesley. It documents 92 testing pitfalls organized into 14 categories. Each pitfall includes description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, and recommendations for avoiding the pitfall and climbing out if you have already fallen in. Check it out on Amazon.com at: http://www.amazon.com/Common-System-Software-Testing-Pitfalls/dp/0133748553/ref=la_B001HQ006A_1_1?s=books&ie=UTF8&qid=1389613893&sr=1-1
I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was considered more of an anti-pattern?
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test-logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used for both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because
your unit tests should be run separately (fast and no dependencies). Your integration tests will be more brittle and break often, so you will probably have a different policy for running/maintaining those tests.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    class OutFileValidator {
        // Open the file, read its lines and compare them with the expected data.
        boolean isCorrect(String fileName, List<String> dataList) throws java.io.IOException {
            return Files.readAllLines(Path.of(fileName)).equals(dataList);
        }
    }
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber/SpecFlow with WatiR/WatiN. Each feature has one or more scenarios; you work on one scenario (behaviour) at a time, and when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation; however, they verify that your implementation works. If there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
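As a hedged illustration (the answer mentions Cucumber/SpecFlow with WatiR/WatiN; this sketch substitutes Cucumber-JVM with Selenium, and the feature text, URL and locators are invented), the step definitions behind one scenario might look like this:

    // Scenario (from a feature file, paraphrased):
    //   Given I am on the login page
    //   When I log in as "me@example.com" with password "secret"
    //   Then I should see my inbox

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSteps {
        // A real suite would share and close the driver in Before/After hooks; kept inline here for brevity.
        private final WebDriver driver = new ChromeDriver();

        @Given("I am on the login page")
        public void iAmOnTheLoginPage() {
            driver.get("http://localhost:8080/login");
        }

        @When("I log in as {string} with password {string}")
        public void iLogInAs(String email, String password) {
            driver.findElement(By.name("email")).sendKeys(email);
            driver.findElement(By.name("password")).sendKeys(password);
            driver.findElement(By.cssSelector("button[type='submit']")).click();
        }

        @Then("I should see my inbox")
        public void iShouldSeeMyInbox() {
            assertTrue(driver.findElement(By.id("inbox")).isDisplayed());
        }
    }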
You can of course use integration testing, as others pointed out.
What is the difference between unit tests and functional tests? Can a unit test also test a function?
Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
You can read more at Unit Testing versus Functional Testing
A well-explained real-life analogy of unit testing and functional testing can be described as follows:
Many times the development of a system is likened to the building of a house. While this analogy isn't quite correct, we can extend it for the purposes of understanding the difference between unit and functional tests.
Unit testing is analogous to a building inspector visiting a house's construction site. He is focused on the various internal systems of the house, the foundation, framing, electrical, plumbing, and so on. He ensures (tests) that the parts of the house will work correctly and safely, that is, meet the building code.
Functional tests in this scenario are analogous to the homeowner visiting this same construction site. He assumes that the internal systems will behave appropriately, that the building inspector is performing his task. The homeowner is focused on what it will be like to live in this house. He is concerned with how the house looks, are the various rooms a comfortable size, does the house fit the family's needs, are the windows in a good spot to catch the morning sun.
The homeowner is performing functional tests on the house. He has the user's perspective.
The building inspector is performing unit tests on the house. He has the builder's perspective.
In summary:
Unit tests are written from a programmer's perspective. They are made to ensure that a particular method (or unit) of a class performs a set of specific tasks.
Functional Tests are written from the user's perspective. They ensure that the system is functioning as users are expecting it to.
Unit Test - testing an individual unit, such as a method (function) in a class, with all dependencies mocked up.
Functional Test - AKA Integration Test, testing a slice of functionality in a system. This will test many methods and may interact with dependencies like Databases or Web Services.
A unit test tests an independent unit of behavior. What is a unit of behavior? It's the smallest piece of the system that can be independently unit tested. (This definition is actually circular, IOW it's really not a definition at all, but it seems to work quite well in practice, because you can sort-of understand it intuitively.)
A functional test tests an independent piece of functionality.
A unit of behavior is very small: while I absolutely dislike this stupid "one unit test per method" mantra, from a size perspective it is about right. A unit of behavior is something between a part of a method and maybe a couple of methods. At most an object, but not more than one.
A piece of functionality usually comprises many methods and cuts across several objects and often through multiple architectural layers.
A unit test would be something like: when I call the validate_country_code() function and pass it the country code 'ZZ' it should return false.
A functional test would be: when I fill out the shipping form with a country code of ZZ, I should be redirected to a help page which allows me to pick my country code out of a menu.
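Written out as a minimal JUnit sketch, that unit test might look like this (the validator is hypothetical; a real implementation would check the full ISO country list rather than the illustrative subset used here):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;
    import java.util.Set;
    import org.junit.jupiter.api.Test;

    class CountryCodeValidator {
        private static final Set<String> KNOWN_CODES = Set.of("US", "DE", "IN");  // illustrative subset only
        static boolean validateCountryCode(String code) {
            return KNOWN_CODES.contains(code);
        }
    }

    class CountryCodeValidatorTest {
        @Test
        void unknownCountryCodeIsRejected() {
            assertFalse(CountryCodeValidator.validateCountryCode("ZZ"));
        }

        @Test
        void knownCountryCodeIsAccepted() {
            assertTrue(CountryCodeValidator.validateCountryCode("DE"));
        }
    }

The corresponding functional test would instead drive the shipping form end to end and assert on the redirect to the help page.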
Unit tests are written by developers, for developers, from the developer's perspective.
Functional tests may be user facing, in which case they are written by developers together with users (or maybe with the right tools and right users even by the users themselves), for users, from the user's perspective. Or they may be developer facing (e.g. when they describe some internal piece of functionality that the user doesn't care about), in which case they are written by developers, for developers, but still from the user's perspective.
In the former case, the functional tests may also serve as acceptance tests and as an executable encoding of functional requirements or a functional specification, in the latter case, they may also serve as integration tests.
Unit tests change frequently, functional tests should never change within a major release.
TLDR:
To answer the question: Unit Testing is a subtype of Functional Testing.
There are two big groups: Functional and Non-Functional Testing. The best (non-exhaustive) illustration that I found is this one (source: www.inflectra.com):
(1) Unit Testing: testing of small snippets of code (functions/methods). It may be considered as (white-box) functional testing.
When functions are put together, you create a module = a standalone piece, possibly with a User Interface that can be tested (Module Testing). Once you have at least two separate modules, then you glue them together and then comes:
(2) Integration Testing: when you put two or more pieces of (sub)modules or (sub)systems together and see if they play nicely together.
Then you integrate the 3rd module, then the 4th and 5th in whatever order you or your team see fit, and once all the jigsaw pieces are placed together, comes
(3) System Testing: testing SW as a whole. This is pretty much "Integration testing of all pieces together".
If that's OK, then comes
(4) Acceptance Testing: did we build what the customer asked for actually? Of course, Acceptance Testing should be done throughout the lifecycle, not just at the last stage, where you realise that the customer wanted a sportscar and you built a van.
"Functional test" does not mean you are testing a function (method) in your code. It means, generally, that you are testing system functionality -- when I run foo file.txt at the command line, the lines in file.txt become reversed, perhaps. In contrast, a single unit test generally covers a single case of a single method -- length("hello") should return 5, and length("hi") should return 2.
See also IBM's take on the line between unit testing and functional testing.
According to ISTQB those two are not comparable. Functional testing is not integration testing.
Unit testing is one of the test levels, and functional testing is a type of testing.
Basically:
The function of a system (or component) is 'what it does'. This is typically described in a requirements specification, a functional specification, or in use cases.
while
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable.
According to ISTQB component/unit test can be functional or not-functional:
Component testing may include testing of functionality and specific non-functional characteristics such as resource-behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing (e.g. decision coverage).
Quotes from Foundations of software testing - ISTQB certification
In Rails, the unit folder is meant to hold tests for your models, the functional folder is meant to hold tests for your controllers, and the integration folder is meant to hold tests that involve any number of controllers interacting. Fixtures are a way of organizing test data; they reside in the fixtures folder. The test_helper.rb file holds the default configuration for your tests.
You can visit this.
Very simply, we can say:
Black box: user interface tests, like functional tests
White box: code tests, like unit tests
Read more here.
AFAIK, unit testing is NOT functional testing. Let me explain with a small example. You want to test if the login functionality of an email web app is working or not, just as a user would. For that, your functional tests should be like this.
1- existing email, wrong password -> login page should show error "wrong password"!
2- non-existing email, any password -> login page should show error "no such email".
3- existing email, right password -> user should be taken to his inbox page.
4- no # symbol in email, right password -> login page should say "errors in form, please fix them!"
Should our functional tests check whether we can log in with invalid inputs? E.g. the email has no # symbol, the username has more than one dot (only one dot is permitted), .com appears before the #, etc.? Generally, no! That kind of testing goes into your unit tests.
You can check that invalid inputs are rejected inside unit tests, as sketched below.
    import java.util.List;

    class LoginInputsValidator {
        static final List<String> ABUSIVE_WORDS = List.of("badword");  // placeholder block list
        // Checks: 1) email format string.string#myapp.com, 2) no abusive words, 3) password at least 10 chars.
        static void validateInputValues(String email, String password) {
            if (!email.matches("\\w+\\.\\w+#myapp\\.com")) throw new IllegalArgumentException("invalid email format");
            if (ABUSIVE_WORDS.stream().anyMatch(email::contains)) throw new IllegalArgumentException("abusive words in email");
            if (password.length() < 10) throw new IllegalArgumentException("password must be at least 10 characters");
        }
    }
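A unit test exercising case 1 against this validator might then look like this:

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    class LoginInputsValidatorTest {
        @Test
        void emailWithWrongFormatIsRejected() {
            assertThrows(IllegalArgumentException.class,
                    () -> LoginInputsValidator.validateInputValues("not-an-email", "longenoughpassword"));
        }
    }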
Notice that the functional test 4 is actually doing what unit test 1 is doing. Sometimes, functional tests can repeat some (not all) of the testing done by unit tests, for different reasons. In our example, we use functional test 4 to check if a particular error message appears on entering invalid input. We don't want to test if all bad inputs are rejected or not. That is the job of unit tests.
The way I think of it is like this: a unit test establishes that the code does what you intended it to do (e.g. you wanted to add parameters a and b, and you in fact add them rather than subtract them); functional tests check that all of the code works together to get a correct result, so that what you intended the code to do in fact gets the right result in the system.
UNIT TESTING
Unit testing involves testing the smallest units of code, which are usually functions or methods. Unit testing is mostly done by the developer of the unit/method/function, because they understand its core. The main goal of the developer is to cover the code with unit tests.
It has the limitation that some functions cannot be tested through unit tests. Even the successful completion of all the unit tests does not guarantee correct operation of the product: the same function can be used in several parts of the system, while the unit test may have been written only for one usage.
FUNCTIONAL TESTING
It is a type of black-box testing where testing is done on the functional aspects of a product without looking into the code. Functional testing is mostly done by a dedicated software tester. It includes positive, negative and boundary-value-analysis (BVA) techniques, using non-standardized data to test the specified functionality of the product. Functional tests generally achieve test coverage in a better way than unit tests. They use the application GUI for testing, so it's easier to determine what exactly a specific part of the interface is responsible for, rather than what function in the code is responsible for it.
Test types
Unit testing - in procedural programming a unit is a procedure; in object-oriented programming a unit is a class. The unit is isolated and reflects a developer's perspective.
Functional testing - more than a unit. User perspective, describing a feature, use case, story...
Integration testing - checks whether separately developed components work together. The counterpart can be another application, service, library, database, network, etc.
Narrow integration test - a test double [About] is used. The main purpose is to check whether the component is configured in the right way.
Broad integration test (end-to-end test, system test) - the live version. The main purpose is to check whether all components are configured in the right way.
UI testing - checks whether user input triggers the correct action and whether the UI changes when actions happen.
...
Non-functional testing - other cases
Performance testing - measures speed and other metrics
Usability testing - UX
...
[iOS tests]
[Android tests]
Unit Test:
Unit testing is used to test the product component by component, especially while the product is under development.
Tools like JUnit and NUnit will also help you test the product unit by unit.
Rather than solving issues after integration, it is always better to get them resolved early in development.
Functional Testing:
As far as testing is concerned, there are two main types of testing:
1. Functional Test
2. Non-Functional Test
A non-functional test is a test where the tester verifies the quality attributes that the customer does not explicitly mention but that should nevertheless be there.
For example: performance, usability, security, load, stress, etc.
In a functional test, however, the customer has already stated their requirements and these are properly documented; the tester's task is to cross-check whether the application functionality performs according to the proposed system.
For that purpose, the tester should test the implemented functionality against the proposed system.
Unit testing is usually done by developers. The objective is to make sure their code works properly. A general rule of thumb is to cover all paths in the code with unit tests.
Functional Testing: This is a good reference. Functional Testing Explanation
What are unit tests, integration tests, smoke tests, and regression tests? What are the differences between them and which tools can I use for each of them?
For example, I use JUnit and NUnit for unit testing and integration testing. Are there any tools for the last two, smoke testing or regression testing?
Unit test: Specify and test one point of the contract of a single method of a class. This should have a very narrow and well-defined scope. Complex dependencies and interactions with the outside world are stubbed or mocked.
Integration test: Test the correct inter-operation of multiple subsystems. There is a whole spectrum there, from testing the integration between two classes to testing the integration with the production environment.
Smoke test (aka sanity check): A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up.
Smoke testing is both an analogy with electronics, where the first test occurs when powering up a circuit (if it smokes, it's bad!)...
... and, apparently, with plumbing, where a system of pipes is literally filled by smoke and then checked visually. If anything smokes, the system is leaky.
Regression test: A test that was written when a bug was fixed. It ensures that this specific bug will not occur again. The full name is "non-regression test". It can also be a test made prior to changing an application to make sure the application provides the same outcome.
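As a hedged illustration (the bug number, class and method names are all made up), such a test is typically pinned to the specific defect it guards against:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class UsernameNormalizer {
        // Fix for (hypothetical) bug #1234: surrounding whitespace and mixed case used to break lookups.
        static String normalize(String raw) {
            return raw.trim().toLowerCase();
        }
    }

    class UsernameNormalizerRegressionTest {
        @Test
        void bug1234_whitespaceAndCaseAreNormalized() {
            assertEquals("alice", UsernameNormalizer.normalize("  Alice "));
        }
    }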
To this, I will add:
Acceptance test: Test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case to provide rather than on the components involved.
System test: Tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test (otherwise it would be more of an integration test).
Pre-flight check: Tests that are repeated in a production-like environment, to alleviate the 'builds on my machine' syndrome. Often this is realized by doing an acceptance or smoke test in a production like environment.
Unit test: an automatic test to test the internal workings of a class. It should be a stand-alone test which is not related to other resources.
Integration test: an automatic test that is done on an environment, so similar to unit tests but with external resources (db, disk access)
Regression test: after implementing new features or bug fixes, you re-test scenarios which worked in the past. Here you cover the possibility in which your new features break existing features.
Smoke testing: first tests on which testers can conclude if they will continue testing.
Everyone will have slightly different definitions, and there are often grey areas. However:
Unit test: does this one little bit (as isolated as possible) work?
Integration test: do these two (or more) components work together?
Smoke test: does this whole system (as close to being a production system as possible) hang together reasonably well? (i.e. are we reasonably confident it won't create a black hole?)
Regression test: have we inadvertently re-introduced any bugs we'd previously fixed?
A new test category I've just become aware of is the canary test. A canary test is an automated, non-destructive test that is run on a regular basis in a live environment, such that if it ever fails, something really bad has happened.
Examples might be:
Has data that should only ever be available in development/test appeared live?
Has a background process failed to run?
Can a user logon?
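A hedged sketch of what such an automated canary could look like (the URL is invented; a real canary would typically hit a dedicated health or login endpoint with a monitoring account and raise an alert on failure):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;

    class LoginCanaryTest {
        @Test
        void loginPageIsReachableInProduction() throws Exception {
            // Non-destructive: a plain GET against the live system, run on a schedule.
            HttpResponse<Void> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("https://example.com/login")).GET().build(),
                    HttpResponse.BodyHandlers.discarding());
            assertEquals(200, response.statusCode());  // anything else means something really bad has happened
        }
    }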
Answer from one of the best websites for software testing techniques:
Types of software testing – complete list click here
It's quite a long description, and I'm not going to paste it here: but it may be helpful for someone who wants to know all the testing techniques.
Unit test: Verifying that a particular component (i.e., a class) that was created or modified functions as designed. This test can be manual or automated, but it does not move beyond the boundary of the component.
Integration test: Verifying that the interaction of particular components function as designed. Integration tests can be performed at the unit level or the system level. These tests can be manual or automated.
Regression test: Verifying that new defects are not introduced into existing code. These tests can be manual or automated.
Depending upon your SDLC (waterfall, RUP, agile, etc.) particular tests may be performed in 'phases' or may all be performed, more or less, at the same time. For example, unit testing may be limited to developers who then turn the code over to testers for integration and regression testing. However, another approach might have developers doing unit testing and some level of integration and regression testing (using a TDD approach along with continuous integration and automated unit and regression tests).
The tool set will depend largely on the codebase, but there are many open source tools for unit testing (JUnit). HP's (Mercury) QTP or Borland's Silk Test are both tools for automated integration and regression testing.
Unit test: testing an individual module or independent component in an application is known as unit testing. Unit testing is done by the developer.
Integration test: combining all the modules and testing the application to verify that the communication and data flow between the modules work properly. This testing is also performed by developers.
Smoke test: in a smoke test, the application is checked in a shallow and wide manner. Smoke testing covers the main functionality of the application. If there is any blocker issue in the application, the testing team reports it to the development team; the development team fixes the defect and hands the build back, and the testing team then checks all the modules to verify that changes made in one module do not impact other modules. In smoke testing the test cases are scripted.
Regression testing: executing the same test cases repeatedly to ensure that the unchanged modules do not cause any defects. Regression testing comes under functional testing.
REGRESSION TESTING-
"A regression test re-runs previous tests against the changed software to ensure that the changes made in the current software do not affect the functionality of the existing software."
I just wanted to add some more context on why we have these levels of tests and what they really mean, with examples.
Mike Cohn in his book “Succeeding with Agile” came up with the “Testing Pyramid” as a way to approach automated tests in projects. There are various interpretations of this model. The model explains what kind of automated tests need to be created, how fast they can give feedback on the application under test and who writes these tests.
There are basically 3 levels of automated testing needed for any project and they are as follows.
Unit Tests-
These test the smallest component of your software application. This could literally be one function in a code which computes a value based on some inputs. This function is part of several other functions of the hardware/software codebase that makes up the application.
For example - Let’s take a web based calculator application. The smallest components of this application that needs to be unit tested could be a function that performs addition, another that performs subtraction and so on. All these small functions put together makes up the calculator application.
Historically developers write these tests, as they are usually written in the same programming language as the software application. Unit testing frameworks such as JUnit (for Java), NUnit and MSTest (for C# and .NET) and Jasmine/Mocha (for JavaScript) are used for this purpose.
The biggest advantage of unit tests is that they run really fast, underneath the UI, and we get quick feedback about the application. They should comprise more than 50% of your automated tests.
API/Integration Tests-
These test various components of the software system together. The components could include databases, APIs (Application Programming Interfaces), and third-party tools and services, along with the application.
For example - In our calculator example above, the web application may use a database to store values, use API’s to do some server side validations and it may use a 3rd party tool/service to publish results to the cloud to make it available across different platforms.
Historically a developer or technical QA would write these tests using various tools such as Postman, SoapUI, JMeter and other tools like Testim.
These run much faster than UI tests, as they still run underneath the hood, but they may consume a little more time than unit tests, as they have to check the communication between various independent components of the system and ensure they integrate seamlessly. These should comprise more than 30% of the automated tests.
UI Tests-
Finally, we have tests that validate the UI of the application. These tests are usually written to test end to end flows through the application.
For example - In the calculator application, an end to end flow could be, opening up the browser-> Entering the calculator application url -> Logging in with username/password -> Opening up the calculator application -> Performing some operations on the calculator -> verifying those results from the UI -> Logging out of the application. This could be one end to end flow that would be a good candidate for UI automation.
Historically, technical QA’s or manual testers write UI tests. They use open source frameworks like Selenium or UI testing platforms like Testim to author, execute and maintain the tests. These tests give more visual feedback as you can see how the tests are running, the difference between the expected and actual results through screenshots, logs, test reports.
The biggest limitation of UI tests is that they are relatively slow compared to unit- and API-level tests. So they should comprise only 10-20% of the overall automated tests.
The next two types of tests can vary based on your project but the idea is-
Smoke Tests
This can be a combination of the above 3 levels of testing. The idea is to run it on every code check-in and ensure the critical functionalities of the system are still working as expected after the new code changes are merged. These tests typically need to run within 5-10 minutes to give fast feedback on failures.
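One common way to carve such a suite out of an existing JUnit 5 code base (an assumption on my part, not something prescribed above) is to tag the critical-path tests and run only that tag on every check-in:

    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CheckoutSmokeTest {
        @Test
        @Tag("smoke")  // run with e.g. `mvn test -Dgroups=smoke`, keeping the whole tagged suite under ~10 minutes
        void criticalCheckoutPathStillWorks() {
            // Placeholder body: a real smoke test would exercise the few business-critical flows end to end.
            Assertions.assertTrue(true);
        }
    }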
Regression Tests
They are usually run at least once a day and cover various functionalities of the system. They ensure the application is still working as expected. They are more detailed than the smoke tests and cover more scenarios of the application, including the non-critical ones.
Integration testing: integration testing checks how one element integrates with another.
Smoke testing: Smoke testing is also known as build version testing. Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Regression testing: regression testing is repeated testing to check whether new software affects other modules or not.
Unit testing: it is white-box testing. Only developers are involved in it.
Unit testing is directed at the smallest part of the implementation possible. In Java this means you are testing a single class. If the class depends on other classes these are faked.
When your test calls more than one class, it's an integration test.
Full test suites can take a long time to run, so after a change many teams run some quick-to-complete tests to detect significant breakages. For example, you may have broken the URIs to essential resources. These are the smoke tests.
Regression tests run on every build and allow you to refactor effectively by catching what you break. Any kind of test can be a regression test, but I find unit tests are most helpful in finding the source of a fault.
Unit Testing
Unit testing is usually done on the developers' side, whereas testers are only partly involved in this type of testing; testing is done unit by unit.
In Java, JUnit test cases can also be used to test whether the written code works as designed.
Integration Testing:
This type of testing is possible after unit testing, when all or some components are integrated. It checks whether, when components are integrated, they affect each other's working capabilities or functionality.
Smoke Testing
This type of testing is done last, when the system is integrated successfully and ready to go to the production server.
It makes sure that every important functionality, from start to end, is working fine and that the system is ready to deploy to the production server.
Regression Testing
This type of testing verifies that unintended/unwanted defects are not introduced into the system when a developer fixes some issues.
It also makes sure that all the bugs are successfully fixed and that no other issues have appeared because of those fixes.
Smoke and sanity testing are both performed after a software build to identify whether to start testing. Sanity may or may not be executed after smoke testing. They can be executed separately or at the same time - sanity being immediately after smoke.
Because sanity testing is more in-depth and takes more time, in most cases it is well worth automating.
Smoke testing usually takes no longer than 5-30 minutes for execution. It is more general: it checks a small number of core functionalities of the whole system, in order to verify that the stability of the software is good enough for further testing and that there are no issues, blocking the run of the planned test cases.
Sanity testing is more detailed than smoke testing and may take from 15 minutes up to a whole day, depending on the scale of the new build. It is a more specialized type of acceptance testing, performed after progression or re-testing. It checks the core features of certain new functionalities and/or bug fixes, together with some closely related features, in order to verify that they function according to the required operational logic, before regression testing can be executed at a larger scale.
Unit testing: it is always performed by developers after their development is done, to find issues from their own testing before they hand any requirement over to QA.
Integration testing: the tester has to verify module-to-submodule integration when data or function output is passed from one module to another, or when the system uses a third-party tool that consumes the system's data for integration.
Smoke testing: performed by the tester to verify the system at a high level and to find show-stopper bugs before changes or code go live.
Regression testing: the tester performs regression to verify existing functionality after changes are implemented in the system for new enhancements or modifications.
Regression test - a type of software testing where we try to cover or check around a bug fix. The functionality around the bug fix should not get changed or altered due to the fix provided. Issues found in such a process are called regression issues.
Smoke testing - a kind of testing done to decide whether to accept the build/software for further QA testing.
There are some good answers already, but I would like to refine them further:
Unit testing is the only form of white box testing here. The others are black box testing. White box testing means that you know the input; you know the inner workings of the mechanism and can inspect it and you know the output. With black box testing you only know what the input is and what the output should be.
So clearly unit testing is the only white box testing here.
Unit testing tests specific pieces of code, usually methods.
Integration testing tests whether your new piece of software can integrate with everything else.
Regression testing. This is testing done to make sure you haven't broken anything. Everything that used to work should still work.
Smoke testing is done as a quick test to make sure everything looks okay before you get involved in the more rigorous testing.
Smoke tests have been explained here already and are simple. Regression tests come under integration tests.
Automated tests can be divided into just two kinds:
Unit tests and integration tests (this is all that matters)
I would use the phrase "long test" (LT) for all tests like integration tests, functional tests, regression tests, UI tests, etc., and "short test" for unit tests.
An LT example could be automatically loading a web page, logging in to the account and buying a book. If the test passes, it is more likely to run the same way on the live site (hence the 'better sleep' reference). Long = the distance between the web page (start) and the database (end).
And this is a great article discussing the benefits of integration testing (long test) over unit testing.