Efficiency pitfalls of doing both Integration and Acceptance testing (automated) [closed]

The advantages of unit testing are obvious to me: the tests are written by developers themselves (either test-first or code-first) and are automated.
What I am a bit unsure about is whether developers should also do integration testing when the team already includes a dedicated tester, who automates as much as possible and does black-box testing of the whole system (end-to-end testing, or, in more common terms, acceptance testing).
For a short background, some more details:
Example Integration Test (MVC webapp)
Setup: Only the controller itself and the layers below the controller are bootstrapped during test setup. Nothing is mocked or stubbed.
Test Entry: The bare controller. Most often a controller's entry points are methods with parameters (e.g. Spring MVC) and can be executed natively. No browser is involved in the test fixture.
Assert targets: Model data and the view name are asserted as direct outputs. Indirect outputs (e.g. data written to the database) can also be asserted. The rendered payload (most often HTML) is ignored completely.
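As a rough illustration, a controller-level integration test of the kind described above might look like the sketch below. It assumes Spring MVC with JUnit 4; LoginController, its login(...) signature, the test context file and the asserted model key are hypothetical.

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.ui.ExtendedModelMap;
import org.springframework.ui.Model;

// Controller-level integration test: the real controller and the layers below it
// are bootstrapped from the Spring context, nothing is mocked, and the handler
// method is called directly as plain Java. LoginController is a made-up example.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
public class LoginControllerIntegrationTest {

    @Autowired
    private LoginController controller;

    @Test
    public void successfulLoginReturnsDashboardViewAndModel() {
        Model model = new ExtendedModelMap();

        String viewName = controller.login("alice", "correct-password", model);

        assertEquals("dashboard", viewName);                  // view name (direct output)
        assertEquals("alice", model.asMap().get("username")); // model data (direct output)
    }
}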
Example Acceptance Test (MVC webapp)
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
Test Entry: The HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
Assert Targets: The test output is the complete rendered response (HTML and other artifacts like JavaScript). Assertions on the database (e.g. that data was inserted) can also be included.
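For contrast, a browser-driven acceptance test for the same flow might look like this sketch, assuming Selenium WebDriver and JUnit 4; the URL, element ids and expected page text are hypothetical.

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// End-to-end acceptance test: the deployed webapp is exercised over HTTP through a real browser.
public class LoginAcceptanceTest {

    private WebDriver driver;

    @Before
    public void startBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void userCanLogInThroughTheLoginForm() {
        driver.get("http://localhost:8080/myapp/login");
        driver.findElement(By.id("username")).sendKeys("alice");
        driver.findElement(By.id("password")).sendKeys("correct-password");
        driver.findElement(By.id("submit")).click();

        // Assert against the fully rendered response, as an end user would see it.
        assertTrue(driver.getPageSource().contains("Welcome, alice"));
    }

    @After
    public void stopBrowser() {
        driver.quit();
    }
}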
Pitfalls of double testing (both Integration + Acceptance)
I see major problems when including both test styles:
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an acceptance test would do. In the end "double testing" could happen, which is highly inefficient.
Controller tests are more white-box in nature and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes high effort, while acceptance tests, where the whole application is started as a black box, are simpler to set up and have the advantage of being closer to production.
The two points above lead me to conclude that if your tester has a good automation strategy, you should skip integration tests done by developers and let them focus on unit tests instead.
What do you think? Can you explain your test strategy? Do you have good/bad experiences including both test styles?
Thanks for reading my long question ;)
EDIT: Acceptance testing seems to be more common jargon than end-to-end, so I switched the terms.

We do Acceptance TDD at my work.
When I first started I was told I could implement whatever policies I wanted so long as the work was completed in a timely and predictable fashion. Having done unit testing in the past, I realized that one of the problems we always ran into was integration bugs. Some could take quite a long time to fix and were often a surprise. We would run into subtle bugs we had introduced while extending the app's functionality.
I decided to avoid those issues I had run into in the past by focusing more on the end-result features that we were supposed to deliver. We would write tests that tested the acceptance behavior, not just at the unit level, but at the whole-system level. I wanted to do that because at the end of the day I don't care whether the unit works correctly, I care that the entire system works correctly. We found the following benefits to doing automated acceptance tests.
We NEVER regress end user functionality because it is explicitly tested for.
Refactors are easier because we don't have to update a bunch of unit tests. We just have to make sure our acceptance test still passes.
The integration of the "units" are implicitly covered.
The tests become a very clear definition of required end user functionality.
Integration issues are exposed earlier and are less of a surprise.
Some of the trade-offs of doing it this way:
Tests can be more complex in terms of usage of mocks, stubs, fixtures, etc.
Tests are less useful for narrowing down which "unit" has the defect.
We also make our test suite runnable via a Continuous Integration server which tags and packages for deployment. It runs with every commit as with most CI setups.
With regard to your points/concerns:
Setup: The whole webapp is bootstrapped (as it would be seen by the end user).
One compromise we do tend to make is to run the tests in the same process space, as with unit tests. Our entry point is the top of the app stack. We don't bother to try and run the app as a server because that adds to the complexity and doesn't add much in terms of coverage.
Test Entry: The HTTP call itself. A browser can be involved as test executor (e.g. Selenium).
All of our automated tests are driven by simulating an HTTP GET, POST, PUT, or DELETE. We don't actually use a browser for this, though; a call into the top of the app stack, routed the way that particular HTTP call gets mapped, works just fine.
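In a Spring MVC codebase like the one in the question, one way to simulate the HTTP call at the top of the app stack without a server or browser is Spring's MockMvc; the following is only a sketch, and the /login mapping, parameters and view name are hypothetical.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.view;
import static org.springframework.test.web.servlet.setup.MockMvcBuilders.webAppContextSetup;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.web.context.WebApplicationContext;

// Simulated-HTTP acceptance test: the request is dispatched through the full MVC
// stack in-process, with no server socket and no browser.
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:webapp-context.xml")
public class LoginFlowAcceptanceTest {

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        mockMvc = webAppContextSetup(context).build();
    }

    @Test
    public void postingValidCredentialsShowsTheDashboard() throws Exception {
        mockMvc.perform(post("/login")
                        .param("username", "alice")
                        .param("password", "correct-password"))
               .andExpect(status().isOk())
               .andExpect(view().name("dashboard"));
    }
}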
Assert Targets: The test output is the complete rendered response (HTML and other artifacts like JavaScript). Assertions on the database (e.g. that data was inserted) can also be included.
I think this is where automated acceptance tests really shine. What you assert is the end-user functionality you want to guarantee you are implementing.
Controller tests are close to general system behaviour (e.g. submit login form, password validation, successful login). This is very close to what an End-to-End test would do. In the end "double testing" could happen, which is highly inefficient.
We actually do very little unit testing and rely almost solely on our automated acceptance tests. As a result we don't have much in the way of double testing.
Controller tests are more white-box in nature and tend to be brittle because they rely on many dependencies from lower layers (in contrast to very fine-grained unit tests). Because of this, setting up and maintaining controller tests takes high effort, while End-to-End tests, where the whole application is started as a black box, are simpler to set up and have the advantage of being closer to production.
They may have more dependencies, but those can be mitigated through the use of mocks and fixtures. We also usually implement our tests with two modes of execution: an unmanaged mode, where the tests run fully wired to the network, databases, etc., and a managed mode, where the tests run with the unmanaged resources mocked out. You are correct, though, in your assertion that the tests can be a lot more effort to create and maintain.
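As a purely hypothetical illustration of that managed/unmanaged idea (none of these names come from the answer), the test wiring could select real or mocked infrastructure from a flag, for example with Mockito:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical sketch: the same acceptance test can run against real infrastructure
// or against mocks, selected by a system property. PaymentGateway and
// RealPaymentGateway are made-up names.
public final class TestResources {

    private TestResources() {
    }

    public static PaymentGateway paymentGateway() {
        if (Boolean.getBoolean("tests.managed")) {
            // Managed mode: unmanaged resources (network, third parties) are mocked out.
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("alice", 100)).thenReturn(true);
            return gateway;
        }
        // Unmanaged mode: fully wired to the real endpoint.
        return new RealPaymentGateway("https://payments.example.com");
    }
}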

Developers should do integration tests of the parts they changed or implemented. By integration tests, I mean that they should check whether the functionality they implemented really works as expected. If you don't do this, how do you know that what you just finished really works? Unit tests by themselves are not the final goal - it is the product that matters.
This should be done in order to speed up bug finding. After all, integration tests take long to execute (at least in my company; because of the complexity it takes 1-2 days to execute all integration tests). Finding bugs earlier is better than later.

Having integration tests (and, indeed, unit tests) that test behaviour that is also tested by a system test helps debugging, by narrowing the location of a defect. If your system has components A-B-C and fails a system test-case, but the assembly A-B passes a similar integration test-case, the defect is probably in component C.

Considering that this post is dealing with testing pitfalls, I would like to make you aware of my most recent book, Common System and Software Testing Pitfalls, which was published last month by Addison Wesley. It documents 92 testing pitfalls organized into 14 categories. Each pitfall includes description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, and recommendations for avoiding the pitfall and climbing out if you have already fallen in. Check it out on Amazon.com at: http://www.amazon.com/Common-System-Software-Testing-Pitfalls/dp/0133748553/ref=la_B001HQ006A_1_1?s=books&ie=UTF8&qid=1389613893&sr=1-1

Related

Grails unit/integration testing

I'm trying to add unit tests and integration tests for my Grails application, but I have some trouble distinguishing between the two and deciding when to use unit or integration tests for my controller actions and services.
The tutorials I found online are not very clear, and I can't find a complete example to follow.
Can you please share helpful topics?
I follow the following guidelines:
Try writing as many unit tests as you can. They can be written for controllers, services, domain classes or any other Groovy classes. The idea is that unit tests are the developer's friends. Writing enough unit tests ensures that the developer makes fewer mistakes, and because they execute quickly, verification is quick too. But unit tests cannot test the following:
Criteria queries, HQL queries
Actual database interactions (queries, transactional behaviour, updates, DB constraints, etc.)
Inter-module interactions
So we write integration tests as well.
Integration tests take longer to execute, and writing them often requires bootstrapping data. But they really are helpful for testing functionality end to end (excluding actual user interaction through the UI, for which functional tests are written). So integration tests can be written for:
Testing all database interactions, since unit tests do not actually test database interactions. This also includes testing criteria queries, HQL, etc.
Testing transactional behaviour (which is dependent on db)
Testing implementations end to end. So this will also test how two independently created modules interact with each other and make sure we have created them correctly.
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds.
One more argument for unit tests is they force you to decouple your code. Integration tests always tempt you to just rely on some other component existing and initialized.
Important links:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html
http://docs.grails.org/latest/guide/testing.html
Unfortunately it is not just a matter of preference or speed. It is a huge subject, but I can give you some advice based on my experience.
If you expect to cover your database access code (queries, transactional behaviour) with unit tests, you are deluding yourself. You are testing how your queries comply with the in-memory implementation of GORM. Not Hibernate, not your database.
I usually have two types of tests. Unit and functional tests. The functional tests will perform a full test, running against a real database, and stimulating the system like a user would (if it is a web site via Geb, if it is a REST api, via a REST client).
The functional tests will set up a startup state by executing some kind of fixture code first. This can be registering a user and logging them in, for example. Then the test will run, and then the postconditions are checked. Here, you can check the postconditions either by accessing the database through the GORM API, or by using production API calls (danger of covering a bug with another bug).
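A minimal sketch of that fixture -> action -> postcondition shape, assuming a REST API, JUnit 4 and Java 11's HttpClient (the endpoints and payloads are made up), could look like:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

// Functional test shape: fixture through the public API, then the action under test,
// then postcondition checks on observable outcomes.
public class RegistrationFunctionalTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void registeredUserCanLogIn() throws Exception {
        // Fixture: put the system into a known start state.
        HttpRequest register = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"alice\",\"password\":\"secret\"}"))
                .build();
        assertEquals(201, client.send(register, HttpResponse.BodyHandlers.ofString()).statusCode());

        // Action: exercise the behaviour like a real client would.
        HttpRequest login = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"alice\",\"password\":\"secret\"}"))
                .build();
        HttpResponse<String> response = client.send(login, HttpResponse.BodyHandlers.ofString());

        // Postconditions: check the response; a database check could be added as well.
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("token"));
    }
}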
Sometimes, your system will interact with a third system. Here, if you can, you can mock the implementation of the third system, by injecting a mock implementation into the system under test.
You also have tools like Spring Cloud Contract, which allow you to create a mock server for your system under test and a specification for your third-party system. See https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html
I use unit tests to thoroughly test all execution paths of a given class. I try to trigger all exception states and all secondary scenarios, to make sure that everything is covered. I don't think it is realistic to reach 100% coverage using functional or integration tests.
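For illustration only, a path-by-path unit test in that spirit might look like this JUnit 4 sketch; PriceCalculator and its rules are hypothetical.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Covers the main path, a secondary scenario and an exception path of one class.
public class PriceCalculatorTest {

    private final PriceCalculator calculator = new PriceCalculator();

    @Test
    public void appliesTenPercentDiscountWithCoupon() {
        assertEquals(90.0, calculator.finalPrice(100.0, "DISCOUNT10"), 0.001);
    }

    @Test
    public void chargesFullPriceWithoutCoupon() {
        assertEquals(100.0, calculator.finalPrice(100.0, null), 0.001);
    }

    // Exception path: secondary scenarios are covered explicitly.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeAmounts() {
        calculator.finalPrice(-1.0, null);
    }
}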

Explanation of the differences between testing tools in PlayFramework 2 (WithApplication, WithServer, WithBrowser, InMemory etc...)

I am new to web application development, and even more so with Play Framework. My goal is to ensure my application is well tested, following Test Driven Development principles.
Play provides in its docs several means of testing a Play application, and oftentimes I have difficulty deciding which kinds of tests I should do and which ones I can do without.
1) testing controllers vs WithApplication vs WithServer
option 1 is to test controllers as plain unit test
option 2 is to test the route using WithApplication and FakeRequest (knowing that the route calls the controller function, this approach feels more complete than option 1)
option 3 is to use WithServer with WS to make a request and await a response (this feels very similar to option 2, except it's using a real server)
Is testing with option 3 just a redundancy over testing with option 2? Can one be discarded in favor of the other?
2) in memory DB vs real DB
the in-memory DB (H2) does not seem to support some Postgres functionalities
testing against in-memory DB does not reflect a connection to a real database
Following the reasons above, I feel like testing with in-memory DB can result in uncaught bugs. Now, I understand that using a real DB is no longer called unit testing, as there are external dependencies. But is unit testing really something we want in this case?
3) WithBrowser (Selenium)
The advantages of this approach are clear, and likely irreplaceable (right?)
It seems like I am missing something when it comes to testing web applications, and clarification would be greatly appreciated.
WithApplication is for testing with a Play application. It's not strictly needed for testing routing, invoking controllers, etc.; they can all be tested without a running application (except for when they can't - some things rely on global state, but this is something that we are gradually fixing in Play). WithApplication, I think, is useful for when you want to test all your components working together. By using WithApplication, you let Play instantiate and wire everything together for you, which may be a lot easier than setting it up manually yourself in your tests.
WithServer has a number of interesting use cases. For one, it's more thorough integration testing than WithApplication: if you invoke a controller with a fake request, a lot of shortcuts are taken, whereas invoking a controller with a real request over the wire doesn't take any shortcuts. Another interesting use case is testing HTTP client code - you may want to make sure that your HTTP client actually makes HTTP requests that make sense, so you set up some mock controllers with a mock router and run them with WithServer. Finally, WithServer may be useful if you want to test an actual client for a REST API that you've written, talking to the actual service.
Whether you use an in-memory database or a real database for testing is a question of hot debate, and Play is not opinionated here; it gives you the necessary tools for doing both. Some people like to use database abstraction tools and keep their database access database-agnostic. The motivations for this can be wide and varied, and certainly one that comes into play is that unit testing can then be done with in-memory databases.
Testing with in-memory databases offers a lot of advantages: you can instantiate a new database for every test, ensuring test isolation - this is the biggest problem I've seen with running tests against a real database. You can also run your tests in parallel, they are usually faster, and they can run on any platform without any infrastructure setup.
Of course, testing against a different database than production does open the possibility for bugs to slip through - but then, anything short of testing every permutation of every possible input and output opens the possibility for bugs to slip through, so all testing is imperfect at best, and a balance has to be achieved between test coverage and the convenience of writing and maintaining tests. So, for some, the advantages of testing against an in-memory database outweigh the disadvantages. And then of course, there are people who like to take advantage of database-specific features; for them, in-memory database testing will be impossible. It's not hard to write test code against a real database in Play, I've done it a lot.

UI testing vs unit testing

What is the different purpose of these two? I mean, under which conditions should I do each of them?
As an example condition: if you have a backend server and several front-end web apps, which would you do first - unit testing of the backend server, or UI testing of the web UI?
Given that condition, the server and the front-end web apps already exist, so it's not an iterative design to build along with (TDD)...
Unit testing aims to test small portions of your code (individual classes / methods) in isolation from the rest of the world.
UI testing may be a different name for system / functional / acceptance testing, where you test the whole system together to ensure it does what it is supposed to do under real life circumstances. (Unless by UI testing you mean usability / look & feel etc. testing, which is typically constrained to details on the UI.)
You need both of these in most of projects, but at different times: unit testing during development (ideally from the very beginning, TDD style), and UI testing somewhat later, once you actually have some complete end-to-end functionality to test.
If you already have the system running, but no tests, practically you have legacy code. Strive to get the best test coverage achievable with the least effort first, which means high level functional tests. Adding unit tests is needed too, but it takes much more effort and starts to pay back later.
Recommended reading: Working Effectively with Legacy Code.
Unit tests should always be done. Unit tests are there to provide proof that each UNIT (read: object) of your technical solution delivers the expected results. To put it very (maybe too) simply, user testing is there to verify that your system fulfills the needs and demands of the user.
The test pyramid [1] is an important concept here, well described by Martin Fowler.
In short, tests that run end-to-end through the UI are brittle and expensive to write. You may consider test recording tools [2] to speed up recording and re-recording. Disclaimer: I'm the developer of such a tool.
[1] https://martinfowler.com/articles/practical-test-pyramid.html
[2] https://anwendo.com
In addition to the accepted answer: today I came up with the question of why not just programmatically trigger layout functions and then unit-test your logic around that as well?
The answer I got from a senior dev was: programmatically triggering layout functions will not be an absolute copy of the real user experience. In the real world, the system triggers many callbacks, for example when the user backgrounds or foregrounds the app. Obviously you can trigger such events manually and test again, but would you be sure you got all events in all sequences right?
The real user experience is one where the user makes actual network calls, taps on screens, loads multiple screens on top of each other, and at times gets system callbacks - callbacks which you forgot about or didn't properly mock. In unit tests you're mainly testing in isolation. In a UI test, you set up the app, may have to log in, etc. The stack you build is much more complex than in a unit test. Hence it's better not to mix unit testing with UI testing.

Integration testing - can it be done right?

I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated more as an anti-pattern?
IMO - and I have no literature to back me on this - the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope which then dictates the input and expected output. You then execute the test, and evaluate the actual to the expected.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used for both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies), while your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
public class OutFileValidator {
    // Open the written file, read its lines and compare them to the expected records.
    public boolean isCorrect(String fileName, List<String> expectedRecords) throws IOException {
        return Files.readAllLines(Paths.get(fileName)).equals(expectedRecords);
    }
}
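A hypothetical integration test could then wire the real Controller, Repository and OutfileWriter together and use the validator on the indirect output (TestWiring, the export(...) signature and the expected records are made up for illustration):

import static org.junit.Assert.assertTrue;

import java.util.Arrays;

import org.junit.Test;

public class ExportIntegrationTest {

    @Test
    public void exportWritesTheRequestedRecords() throws Exception {
        // Real Controller wired to the real Repository and OutfileWriter;
        // TestWiring is a hypothetical helper that builds that object graph.
        Controller controller = TestWiring.realController();

        controller.export(Arrays.asList(1L, 2L, 3L), "/tmp/out.txt");

        assertTrue(new OutFileValidator().isCorrect(
                "/tmp/out.txt",
                Arrays.asList("record-1", "record-2", "record-3")));
    }
}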
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with Watir / WatiN. For each feature there are one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation; however, they verify that your implementation works. If there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
You can of course use integration testing, as others pointed out.
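As a sketch of what such step definitions can look like with Cucumber-JVM and Selenium (the page URL, element ids and step wording are hypothetical):

import static org.junit.Assert.assertTrue;

import io.cucumber.java.After;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Step definitions driving the application end to end through a browser.
public class LoginSteps {

    private final WebDriver driver = new FirefoxDriver();

    @Given("I am on the login page")
    public void iAmOnTheLoginPage() {
        driver.get("http://localhost:8080/myapp/login");
    }

    @When("I log in as {string}")
    public void iLogInAs(String user) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys("correct-password");
        driver.findElement(By.id("submit")).click();
    }

    @Then("I see my dashboard")
    public void iSeeMyDashboard() {
        assertTrue(driver.getPageSource().contains("Dashboard"));
    }

    @After
    public void quitBrowser() {
        driver.quit();
    }
}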

What are the differences between unit tests, integration tests, smoke tests, and regression tests? [closed]

What are unit tests, integration tests, smoke tests, and regression tests? What are the differences between them and which tools can I use for each of them?
For example, I use JUnit and NUnit for unit testing and integration testing. Are there any tools for the last two, smoke testing or regression testing?
Unit test: Specify and test one point of the contract of single method of a class. This should have a very narrow and well defined scope. Complex dependencies and interactions to the outside world are stubbed or mocked.
Integration test: Test the correct inter-operation of multiple subsystems. There is a whole spectrum there, from testing the integration between two classes to testing the integration with the production environment.
Smoke test (aka sanity check): A simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up.
Smoke testing is both an analogy with electronics, where the first test occurs when powering up a circuit (if it smokes, it's bad!)...
... and, apparently, with plumbing, where a system of pipes is literally filled by smoke and then checked visually. If anything smokes, the system is leaky.
Regression test: A test that was written when a bug was fixed. It ensures that this specific bug will not occur again. The full name is "non-regression test". It can also be a test made prior to changing an application to make sure the application provides the same outcome. (A minimal sketch of such a test follows this list of definitions.)
To this, I will add:
Acceptance test: Test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case to provide rather than on the components involved.
System test: Tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test (otherwise it would be more of an integration test).
Pre-flight check: Tests that are repeated in a production-like environment, to alleviate the 'builds on my machine' syndrome. Often this is realized by doing an acceptance or smoke test in a production like environment.
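To make the regression-test definition above concrete, here is a minimal JUnit 4 sketch pinned to a fixed bug; the class, method and ticket number are invented.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Regression test written when a bug was fixed, to make sure it never comes back.
public class InvoiceFormatterRegressionTest {

    // Bug #1234: totals over 999 were rendered without a thousands separator.
    @Test
    public void formatsThousandsSeparatorCorrectly() {
        assertEquals("1,250.00", new InvoiceFormatter().format(1250.00));
    }
}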
Unit test: an automatic test to test the internal workings of a class. It should be a stand-alone test which is not related to other resources.
Integration test: an automatic test that is done on an environment, so similar to unit tests but with external resources (db, disk access)
Regression test: after implementing new features or bug fixes, you re-test scenarios which worked in the past. Here you cover the possibility in which your new features break existing features.
Smoke testing: the first tests, based on which testers can conclude whether they will continue testing.
Everyone will have slightly different definitions, and there are often grey areas. However:
Unit test: does this one little bit (as isolated as possible) work?
Integration test: do these two (or more) components work together?
Smoke test: does this whole system (as close to being a production system as possible) hang together reasonably well? (i.e. are we reasonably confident it won't create a black hole?)
Regression test: have we inadvertently re-introduced any bugs we'd previously fixed?
A new test category I've just become aware of is the canary test. A canary test is an automated, non-destructive test that is run on a regular basis in a live environment, such that if it ever fails, something really bad has happened. (A small sketch of such a check follows the examples below.)
Examples might be:
Has data that should only ever be available in development/test appeared live?
Has a background process failed to run?
Can a user logon?
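A canary check along those lines could be as small as the following sketch, run on a schedule against the live environment; the health endpoint is hypothetical and the example assumes Java 11's HttpClient.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Small, non-destructive probe run regularly against production; a failure means
// something really bad has happened.
public class LoginCanary {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/health/login"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            System.err.println("Login canary failed with HTTP " + response.statusCode());
            System.exit(1);
        }
        System.out.println("Login canary OK");
    }
}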
Answer from one of the best websites for software testing techniques:
Types of software testing – complete list click here
It's quite a long description, and I'm not going to paste it here: but it may be helpful for someone who wants to know all the testing techniques.
Unit test: Verifying that particular component (i.e., class) created or modified functions as designed. This test can be manual or automated, but it does not move beyond the boundary of the component.
Integration test: Verifying that the interaction of particular components function as designed. Integration tests can be performed at the unit level or the system level. These tests can be manual or automated.
Regression test: Verifying that new defects are not introduced into existing code. These tests can be manual or automated.
Depending upon your SDLC (waterfall, RUP, agile, etc.) particular tests may be performed in 'phases' or may all be performed, more or less, at the same time. For example, unit testing may be limited to developers who then turn the code over to testers for integration and regression testing. However, another approach might have developers doing unit testing and some level of integration and regression testing (using a TDD approach along with continuous integration and automated unit and regression tests).
The tool set will depend largely on the codebase, but there are many open source tools for unit testing (JUnit). HP's (Mercury) QTP or Borland's Silk Test are both tools for automated integration and regression testing.
Unit test: Testing an individual module or independent component in an application is known as unit testing. Unit testing is done by the developer.
Integration test: Combining all the modules and testing the application to verify that the communication and data flow between the modules work properly. This testing is also performed by developers.
Smoke test: In a smoke test we check the application in a shallow and wide manner, covering its main functionality. If there is any blocking issue in the application, it is reported to the development team, which fixes the defect and hands the build back to the testing team. The testing team then checks all the modules to verify whether changes made in one module impact any other module. In smoke testing the test cases are scripted.
Regression testing: executing the same test cases repeatedly to ensure that the unchanged modules do not cause any defects. Regression testing comes under functional testing.
REGRESSION TESTING-
"A regression test re-runs previous tests against the changed software to ensure that the changes made in the current software do not affect the functionality of the existing software."
I just wanted to add some more context on why we have these levels of tests and what they really mean, with examples.
Mike Cohn in his book “Succeeding with Agile” came up with the “Testing Pyramid” as a way to approach automated tests in projects. There are various interpretations of this model. The model explains what kind of automated tests need to be created, how fast they can give feedback on the application under test and who writes these tests.
There are basically 3 levels of automated testing needed for any project and they are as follows.
Unit Tests-
These test the smallest components of your software application. A unit could literally be one function in the code which computes a value based on some inputs; that function is one of many that make up the application.
For example - Let’s take a web-based calculator application. The smallest components of this application that need to be unit tested could be a function that performs addition, another that performs subtraction, and so on. All these small functions put together make up the calculator application.
Historically, developers write these tests, as they are usually written in the same programming language as the application. Unit testing frameworks such as JUnit (for Java), NUnit and MSTest (for C# and .NET) and Jasmine/Mocha (for JavaScript) are used for this purpose.
The biggest advantage of unit tests is that they run really fast, underneath the UI, and give quick feedback about the application. They should comprise more than 50% of your automated tests.
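To make that concrete, a unit test for the calculator's add function might be as small as this JUnit 4 sketch (the Calculator class is hypothetical):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Unit test of the smallest component of the calculator: the add function.
public class CalculatorAddTest {

    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }

    @Test
    public void addingZeroReturnsTheSameNumber() {
        assertEquals(7, new Calculator().add(7, 0));
    }
}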
API/Integration Tests-
These test various components of the software system together. The components could include databases, APIs (Application Programming Interfaces), third-party tools and services, along with the application itself.
For example - In our calculator example above, the web application may use a database to store values, use APIs to do some server-side validations, and it may use a third-party tool/service to publish results to the cloud to make them available across different platforms.
Historically a developer or technical QA would write these tests using various tools such as Postman, SoapUI, JMeter and other tools like Testim.
These run much faster than UI tests as they still run underneath the hood, but they may consume a little more time than unit tests because they have to check the communication between various independent components of the system and ensure they integrate seamlessly. These should comprise more than 30% of the automated tests.
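An API-level test for the same calculator might look like the following sketch, assuming JUnit 4, Java 11's HttpClient and a made-up /api/add endpoint:

import static org.junit.Assert.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

// API/integration test: exercises the HTTP API and whatever sits behind it
// (database, validations) without touching the UI.
public class CalculatorApiTest {

    @Test
    public void additionEndpointReturnsTheSum() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/add?a=2&b=3"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertEquals("5", response.body().trim());
    }
}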
UI Tests-
Finally, we have tests that validate the UI of the application. These tests are usually written to test end to end flows through the application.
For example - In the calculator application, an end to end flow could be, opening up the browser-> Entering the calculator application url -> Logging in with username/password -> Opening up the calculator application -> Performing some operations on the calculator -> verifying those results from the UI -> Logging out of the application. This could be one end to end flow that would be a good candidate for UI automation.
Historically, technical QAs or manual testers write UI tests. They use open-source frameworks like Selenium or UI testing platforms like Testim to author, execute and maintain the tests. These tests give more visual feedback, as you can see how the tests are running and the difference between the expected and actual results through screenshots, logs and test reports.
The biggest limitation of UI tests is that they are relatively slow compared to unit and API-level tests, so they should comprise only 10-20% of the overall automated tests.
The next two types of tests can vary based on your project but the idea is-
Smoke Tests
This can be a combination of the above 3 levels of testing. The idea is to run them on every code check-in and ensure the critical functionality of the system is still working as expected after the new code changes are merged. They typically need to run within 5-10 minutes to give fast feedback on failures.
Regression Tests
They are usually run at least once a day and cover various functionalities of the system. They ensure the application is still working as expected. They are more detailed than the smoke tests and cover more scenarios of the application, including the non-critical ones.
Integration testing: Integration testing means integrating one element with another and testing how they work together.
Smoke testing: Smoke testing is also known as build version testing. Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Regression testing: Regression testing is repeated testing, to check whether new changes affect other modules or not.
Unit testing: It is white-box testing. Only developers are involved in it.
Unit testing is directed at the smallest part of the implementation possible. In Java this means you are testing a single class. If the class depends on other classes these are faked.
When your test calls more than one class, it's an integration test.
Full test suites can take a long time to run, so after a change many teams run some quick-to-complete tests to detect significant breakages - for example, broken URIs to essential resources. These are the smoke tests.
Regression tests run on every build and allow you to refactor effectively by catching what you break. Any kind of test can be a regression test, but I find unit tests are the most helpful in finding the source of a fault.
Unit Testing
Unit testing is usually done on the developers' side, whereas testers are only partly involved in this type of testing; testing is done unit by unit.
In Java, JUnit test cases can also be used to check whether the written code is designed as intended.
Integration Testing:
This type of testing is possible after unit testing, when all or some components are integrated. It makes sure that when components are integrated, they do not affect each other's working capabilities or functionality.
Smoke Testing
This type of testing is done last, when the system has been integrated successfully and is ready to go to the production server.
It makes sure that every important piece of functionality works fine from start to end and that the system is ready to deploy to the production server.
Regression Testing
This type of testing is important to verify that unintended/unwanted defects are not introduced into the system when the developer fixes some issues.
It also makes sure that all the bugs are successfully fixed and that no other issues arise because of those fixes.
Smoke and sanity testing are both performed after a software build to identify whether to start testing. Sanity may or may not be executed after smoke testing. They can be executed separately or at the same time - sanity being immediately after smoke.
Because sanity testing is more in-depth and takes more time, in most cases it is well worth automating.
Smoke testing usually takes no longer than 5-30 minutes for execution. It is more general: it checks a small number of core functionalities of the whole system, in order to verify that the stability of the software is good enough for further testing and that there are no issues, blocking the run of the planned test cases.
Sanity testing is more detailed than smoke and may take from 15 minutes up to a whole day, depending on the scale of the new build. It is a more specialized type of acceptance testing, performed after progression or re-testing. It checks the core features of certain new functionalities and/or bug fixes together with some closely related to them features, in order to verify that they are functioning as to the required operational logic, before regression testing can be executed at a larger scale.
Unit testing: Always performed by developers after their development is done, to find issues from their own testing before a requirement is handed over to QA.
Integration testing: The tester verifies module-to-submodule integration when data or function output is passed from one module to another, or when the system integrates with a third-party tool that consumes the system's data.
Smoke testing: Performed by testers to verify the system at a high level and to find any show-stopper bugs before the changes or code go live.
Regression testing: Performed by testers to verify existing functionality after changes or new enhancements are implemented in the system.
Regression test - A type of software testing where we check around a bug fix: the functionality around the fix should not change or be altered because of the fix provided. Issues found in this process are called regression issues.
Smoke Testing: Is a kind of testing done to decide whether to accept the build/software for further QA testing.
There are some good answers already, but I would like to refine them further:
Unit testing is the only form of white box testing here. The others are black box testing. White box testing means that you know the input; you know the inner workings of the mechanism and can inspect it and you know the output. With black box testing you only know what the input is and what the output should be.
So clearly unit testing is the only white box testing here.
Unit testing tests specific pieces of code, usually methods.
Integration testing tests whether your new piece of software integrates with everything else.
Regression testing. This is testing done to make sure you haven't broken anything. Everything that used to work should still work.
Smoke testing is done as a quick test to make sure everything looks okay before you get involved in the more rigorous testing.
Smoke tests have been explained here already and are simple. Regression tests come under integration tests.
Automated tests can be divided into just two.
Unit tests and integration tests (this is all that matters)
I would use the phrase "long test" (LT) for all tests like integration tests, functional tests, regression tests, UI tests, etc., and call unit tests "short tests".
An LT example could be automatically loading a web page, logging in to an account and buying a book. If the test passes, it is more likely to run the same way on the live site (hence the 'better sleep' reference). Long = the distance between the web page (start) and the database (end).
And this is a great article discussing the benefits of integration testing (long test) over unit testing.