When to Unit Test and When to Integration Test - unit-testing

I am new to testing in general and am working on a Grails application.
I want to write a test that says "when this action is called, the correct view is returned". I don't know how to go about deciding if I should make something like this a unit test or an integration test. Either test would show me what I want - how do I decide?

One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things do slip out of focus.
I prefer unit tests, which start in no more than 2 seconds and can be run several times in those 15 seconds, especially with mockDomain(), and especially now that Grails 2.0 supports criteria and named queries in unit tests.
One more argument for unit tests is that they force you to decouple your code. Integration tests always tempt you to just rely on some other component already existing and being initialized.

From Grails Docs section 9.1
Unit testing are tests at the "unit" level. In other words you are testing individual methods or blocks of code without considering for surrounding infrastructure. In Grails you need to be particularly aware of the difference between unit and integration tests because in unit tests Grails does not inject any of the dynamic methods present during integration tests and at runtime.
From Grails Docs section 9.2
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. Grails will use an in-memory HSQLDB database for integration tests and clear out all the data from the database in between each test.
What this means is that a unit test is completely isolated from the Grails environment whereas an integration test is not. According to Scott Davis, author of this article, it is acceptable to write only integration tests...
Unit vs. integration tests
As I mentioned earlier, Grails supports two basic types of tests: unit and integration. There's no syntactical difference between the two — both are written as a GroovyTestCase using the same assertions. The difference is the semantics. A unit test is meant to test the class in isolation, whereas the integration test allows you to test the class in a full, running environment.
Quite frankly, if you want to write all of your Grails tests as integration tests, that's just fine with me. All of the Grails create-* commands generate corresponding integration tests, so most folks simply use what is already there. As you'll see in just a moment, most of the things you want to test require the full environment to be up and running anyway, so integration tests are a pretty good default. If you have noncore Grails classes that you'd like to test, unit tests are perfectly fine.

First go through this chapter of the grails guide http://grails.org/doc/latest/guide/9.%20Testing.html
It talks about testing controllers and the ability to get the controller response, like so:
controller.response.contentAsString
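For the original question ("when this action is called, the correct view is returned"), a unit test is usually enough. Here is a minimal sketch in the Grails 2 / Spock style, with a hypothetical BookController (shown inline for completeness) whose show action explicitly renders a view:

import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(BookController)
class BookControllerSpec extends Specification {

    void "show action renders the show view"() {
        when: "the action is called"
        controller.show()

        then: "the unit-test mixin records the view chosen by render(view: ...)"
        view == '/book/show'
        model.book != null
    }
}

// a hypothetical controller, included only so the sketch is self-contained
class BookController {
    def show() {
        render view: 'show', model: [book: 'a book']
    }
}

If the action redirects instead, response.redirectedUrl can be asserted in the same kind of unit test.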
Now, deciding which test to use is more of an art than a science. I prefer unit tests because they are faster to run :)

It's a really interesting and challenging question to answer, but the truth is it really depends on what exactly you are testing.
Take the following test: "saving a book to the database". The hints are in the description. We are saying we need a book and we need a database, so in this case a unit test won't do, because we need the integrated database.
My advice is to write the full test description down and break it down like I did above. It will give you the hints to help you decide.
This is made easier with Spock, where you can use strings for test names.
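For example, the description above might map onto a Spock integration test roughly like this. It is only a sketch, assuming the IntegrationSpec base class shipped with Grails 2.x and a hypothetical Book domain class:

import grails.test.spock.IntegrationSpec

class BookPersistenceSpec extends IntegrationSpec {

    void "saving a book to the database"() {
        given: "a book"
        def book = new Book(title: 'The Pragmatic Programmer')

        when: "it is saved"
        book.save(flush: true, failOnError: true)

        then: "it can be read back from the integrated database"
        Book.get(book.id) != null
    }
}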

Related

How does one write an automated integration test?

I read a lot about unit testing and integration testing, and the unit testing part I understand fairly well - isolate the object under test, and mock out dependencies using interfaces and inject those. (Or use seams to inject testable behavior).
However something that is still a mystery to me even after searching is integration testing. Every blog and link talks about testing various components together, do's and dont's, CI Servers, etc, but there's not much explanation in the way of how this is to be done.
Is an integration test automated? Or is this a manual effort? If it is automated, do I write this as code in the native language my app is in? How do I check or verify if the integration test is working as expected?
For example, if I have 4 services (a Socket Client, a Socket Server, a Database, and a Web Application) and I want to do some integration testing on how these 4 services interact with each other, how exactly would I approach this? I know that some dummy data will be involved, but which part of my integration test is checking whether the systems are working together correctly? This part is really unclear to me.
As you said about unit testing, you need to isolate the object under test, and mock out dependencies using interfaces and inject those...
An integration test is done without isolating the unit... You test two or more parts together (but you shouldn't have extensive integration test suites or big test scenarios). An example of an integration test is testing your code against the database; for that you may need to initialize the database and clean it up afterwards so the tests are repeatable.
You could also have interface tests (client execution and integration with the server, for example) and so on.
Don't forget that these tests come with a downside: they are slower and harder to maintain. But they have a different purpose than unit tests, namely checking that the units work as expected with their real dependencies.
So, to wrap up: if you write a test without totally isolating the unit, it is an integration test. Because of its nature, it's better to test the logic of your code with unit tests and reserve fewer integration tests for when you need to test the interaction between the units.
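To illustrate the point about initializing and cleaning the database so the tests stay repeatable, here is a minimal sketch using plain Spock and groovy.sql.Sql. The in-memory H2 URL and the orders table are assumptions made for the example, and the H2 driver is assumed to be on the test classpath:

import groovy.sql.Sql
import spock.lang.Specification

class OrderDaoIntegrationSpec extends Specification {

    // an in-memory H2 database (assumes the H2 driver is available)
    Sql sql = Sql.newInstance('jdbc:h2:mem:testDb', 'org.h2.Driver')

    def setup() {
        // initialize the database: create the schema the test needs
        sql.execute('create table if not exists orders(id int primary key, customer varchar(100))')
    }

    def cleanup() {
        // clean the data so the test is repeatable, then release the connection
        sql.execute('delete from orders')
        sql.close()
    }

    void "an inserted order can be read back"() {
        when:
        sql.execute("insert into orders(id, customer) values (1, 'Alice')")

        then:
        sql.firstRow('select customer from orders where id = 1').customer == 'Alice'
    }
}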
You can also check this nice introduction https://softwareengineering.stackexchange.com/questions/48237/what-is-an-integration-test-exactly
An interesting definition from "The Pragmatic Programmer": integration tests show that the major parts of a system work well together.

Grails unit/integration testing

I'm trying to add unit tests and integration tests to my Grails application, but I have some trouble distinguishing between the two and deciding when to use unit or integration tests for my controller actions and services.
The tutorials I found online are not very clear, and I can't find a complete example to follow.
Can you please share helpful topics?
I follow these guidelines:
Try writing as many unit tests as you can. They can be written for controllers, services, domain classes or any other Groovy classes. The idea is that unit tests are the developer's friends. Writing enough unit tests will ensure that the developer makes fewer mistakes, and as they execute quickly, this means quick verification. But unit tests cannot test the following:
Criteria queries, HQL queries
Actual database Interactions (queries, transactional behaviour, updates, db constraints etc.)
Inter modular interactions
So we write integration tests as well.
Integration tests take longer to execute, and writing them often requires bootstrapping data. But they are really helpful for testing functionality end to end (excluding the actual user interactions through the UI, for which functional tests are written). So integration tests can be written for:
Testing all database interactions, since unit tests do not actually test database interactions. This also includes testing criteria queries, HQL, etc. (a sketch follows this list).
Testing transactional behaviour (which depends on the db).
Testing implementations end to end, which also tests how two independently created modules interact with each other and makes sure we have created them correctly.
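For instance, one of the criteria queries mentioned above could be exercised in an integration test roughly like this. This is only a sketch: IntegrationSpec is the Spock base class shipped with Grails 2.x, and Book with its title property is a hypothetical domain class.

import grails.test.spock.IntegrationSpec

class BookQueryIntegrationSpec extends IntegrationSpec {

    void "criteria query finds books whose title starts with G"() {
        given:
        new Book(title: 'Groovy in Action').save(flush: true, failOnError: true)
        new Book(title: 'Grails in Action').save(flush: true, failOnError: true)
        new Book(title: 'Effective Java').save(flush: true, failOnError: true)

        expect: "the criteria runs against the real GORM/database stack"
        Book.createCriteria().list { like('title', 'G%') }.size() == 2
    }
}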
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things do slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds.
One more argument for unit tests is that they force you to decouple your code. Integration tests always tempt you to just rely on some other component already existing and being initialized.
Important links:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html
http://docs.grails.org/latest/guide/testing.html
Unfortunately it is not just a matter of preference or speed. It is a huge subject, but I can give you some advice based on my experience.
If you expect to cover your database access code (queries, transactional behaviour) with unit tests, you are deluding yourself. You are testing how your queries comply with the in-memory implementation of GORM. Not Hibernate, not your database.
I usually have two types of tests: unit and functional tests. The functional tests perform a full test, running against a real database and stimulating the system like a user would (if it is a web site, via Geb; if it is a REST API, via a REST client).
The functional tests will set up a starting state by executing some kind of fixture code first. This can be registering a user and logging them in, for example. Then the test runs, and then the postconditions are checked. Here, you can check the postconditions either by accessing the database through the GORM API, or by using production API calls (with the danger of covering a bug with another bug).
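A minimal sketch of what such a functional test can look like with Geb and Spock. The URL, the CSS selectors and the fixture helper are all assumptions made for illustration; a real suite would drive the application's actual pages and registration API:

import geb.spock.GebSpec

class LoginFunctionalSpec extends GebSpec {

    void "a freshly registered user can log in"() {
        given: "fixture code sets up the starting state"
        def user = registerTestUser()

        when: "the system is stimulated like a user would"
        go '/login'
        $('input[name=username]').value(user.username)
        $('input[name=password]').value(user.password)
        $('input[type=submit]').click()

        then: "the postcondition is checked through the UI"
        $('h1').text() == 'Welcome, ' + user.username
    }

    private Map registerTestUser() {
        // hypothetical fixture: a real suite would call the application's registration API here
        [username: 'test-user', password: 'secret']
    }
}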
Sometimes your system will interact with a third system. Here, if you can, mock the third system by injecting a mock implementation into the system under test.
You also have tools like Spring Cloud Contract, which let you create both a mock server for your system under test and a specification for your third-party system. See https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html
I use the unit tests to thoroughly test all execution paths of a given class. I will try to trigger all exception states and all secondary scenarios, to make sure that everything is covered. I don't think it is realistic to reach 100% coverage by using functional or integration tests.
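As a small sketch of what "triggering an exception state" in a unit test can look like (PriceCalculator and its pricing rule are made up for the example):

import spock.lang.Specification

class PriceCalculatorSpec extends Specification {

    void "a negative quantity is rejected"() {
        when: "the secondary, error path is exercised"
        new PriceCalculator().priceFor(-1)

        then:
        thrown(IllegalArgumentException)
    }

    // a trivial stand-in class so the sketch is self-contained
    static class PriceCalculator {
        BigDecimal priceFor(int quantity) {
            if (quantity < 0) throw new IllegalArgumentException('quantity must not be negative')
            quantity * 9.99
        }
    }
}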

SWTBot vs. Unit Testing

We use SWTBot for writing functional tests. Some cases are very difficult to test, and some programmers use classes and their methods directly from the implementation (for example, calling methods from the class AddUserDialog, etc.). Is this a good approach? And why?
And the next question, please: is SWTBot enough for testing an Eclipse RCP-based application? Is it necessary to write unit tests?
Note: We are a Scrum team.
SWTBot and JUnit serve two different purposes.
JUnit
As the name implies, JUnit is meant for unit testing. Unit tests should be small and fast to execute. They test only a single unit of code and the above mentioned attributes allow them to be executed often while developing the unit under test.
But there is more to (good) unit tests. You may want to read one of the following posts for further attributes of unit tests:
Key qualities of a good unit test
What attribute should a good Unit-Test have?
I would go one step further and say that unit tests only make sense in TDD, that is, when you write the test before the production code. Otherwise you neglect the tests. Who wants to make the extra effort of writing tests for something that already works? And even if you have the discipline to write the tests afterwards, they merely manifest the state of your production code. Writing tests beforehand, as in TDD, instead leads to lean production code that only does what is required by the tests.
But I guess that's something not everyone will agree on.
In an RCP setting, unit tests would ideally be able to run without starting the platform (which takes rather long), i.e. they would not need to be run as PDE JUnit tests but as plain JUnit tests instead. Therefore the unit under test should be isolated from the RCP APIs.
On a related note, see also this question: How to efficiently JUnit test Eclipse RCP Plugins
SWTBot
While SWTBot uses the JUnit runtime to execute the tests, it is rather meant as a utility to create integration or functional tests. SWTBot, when used with RCP, starts the entire workbench and runs all tests within the same instance. Therefore great care should be taken to ensure that each test leaves the environment in the same state as it was before the test started. Specialized Rules may help here to set up and tear down a particular recurring scenario.
It is perfectly valid, in order to set up an SWTBot test, to call methods from your application. For example, you could programmatically open the wizard and then use SWTBot to simulate a user that enters data and presses the OK button. There is no need to use SWTBot to laboriously open the wizard itself.
In my experience, SWTBot is even too much for simple use cases. Consider a test that should enter some data into a dialog and then press OK. If you already have the dialog opened programmatically, you can just as well continue without SWTBot:
dialog.textField.setText( "data" );
dialog.okButton.notifyListeners( SWT.Selection, null );
assertThat( dialog.getEnteredData() ).isEqualTo( "data" );
Use Both
The best bet is to have both: unit tests that ensure the behavior of the respective units, and functional tests that make sure that the particular units play together as desired.
Not sure if that answers the question, if you have further concerns please leave a comment.

When to perform my unit test and why use Moq

Q1: When is it ideal to run unit tests? Should they be run each time before I debug the app? Should they be run before I commit changes to SVN? I think if an app only has a couple of unit tests, they should be run each time the app is about to be debugged. But let's say we have hundreds of unit tests that can take a while to complete; I'm not sure whether that is ideal or not. I think then it would be better to just run them before committing or deploying.
Q2: In my app I'm using a repository pattern with a service layer. I've done some research on how to test a service when the service is calling a repository and the repository is querying the db. So in order for it to be a true unit test and not an integration test, I have to find a way to test without touching the database. I found people are using Moq to mock their repository. Here's where I have a problem: to me it seems that if I mock a repository then I'm changing the behavior of how the method is supposed to work, which seems like a pointless unit test. It doesn't seem you are actually testing your code. Am I completely wrong about this? Thanks for any advice.
Let me take a shot.
A1: When you refactor existing code, you should execute the corresponding unit tests (not all of them) and see if anything is broken by your changes. For new functionality you should implement new unit tests in parallel, using TDD. You should not have to execute all the unit tests on your own; rely on continuous integration for that.
A2: I had the same opinion as you. But now I am convinced that unit testing the service layer is required. Whatever can be covered by unit tests should be covered. At this point, the core of your services might just be a delegation to repositories, but services evolve. The services take up the responsibility of parameter validation, authorization, logging, transactions, batch-support APIs, etc. Then it is not only data access but many more things. If I were in your place, I would go for unit testing of services by mocking repositories. Sometimes, services provide convenience methods on top of the repository.
Hope it might be of some help to you.
A1. When making changes to your code, the more often you run the unit tests, the faster you will get feedback on whether the behavior they were written to assert has been affected, so the more often the better! Unit tests should be very fast, and running several hundred should only take a couple of minutes at most. It might also be worth looking into Infinitest (if working with Java; I expect an alternative exists for .NET etc.). It is a plugin for Eclipse that automatically runs your unit tests when Eclipse builds your project. It is clever enough to run only the tests that have been affected since the last time it ran; e.g. if you update a test, or if you update some "application" code that is covered by some unit tests, the specific tests will be executed.
A2. Unit tests will cover many different scenarios that call your services and DAOs many times. Using "real" services will make it difficult to guarantee the results of each call (and setting up the data for each test can be painful), and it is also slow. When unit testing, it's usually better to mock these services, and to test the services themselves independently with integration tests.
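Since most of this page is Grails-centric, here is the same idea sketched with a Spock mock rather than Moq. The point of such a test is not to re-test the repository, but to verify the service's own logic and its interaction with the repository. UserService, UserRepository and User are hypothetical stand-ins, defined inline so the sketch is self-contained:

import spock.lang.Specification

class UserServiceSpec extends Specification {

    void "displayNameFor asks the repository once and maps the result"() {
        given: "a mocked repository, so the database is never touched"
        def repository = Mock(UserRepository)
        def service = new UserService(repository: repository)

        when:
        def name = service.displayNameFor(42L)

        then: "the interaction is verified and the return value stubbed"
        1 * repository.findById(42L) >> new User(firstName: 'Ada', lastName: 'Lovelace')
        name == 'Ada Lovelace'
    }

    // trivial stand-ins so the sketch is self-contained
    static class User { String firstName; String lastName }
    interface UserRepository { User findById(long id) }
    static class UserService {
        UserRepository repository
        String displayNameFor(long id) {
            def user = repository.findById(id)
            "${user.firstName} ${user.lastName}"
        }
    }
}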

Unit Test? Integration Test? Regression Test? Acceptance Test?

Can anyone clearly define these levels of testing? I find it difficult to differentiate between them when doing TDD or unit testing. Please, can anyone elaborate on how and when to implement each of these?
Briefly:
Unit testing - You unit test each individual piece of code. Think each file or class.
Integration testing - When putting several units together that interact you need to conduct Integration testing to make sure that integrating these units together has not introduced any errors.
Regression testing - after integrating (and maybe fixing) you should run your unit tests again. This is regression testing to ensure that further changes have not broken any units that were already tested. The unit testing you already did has produced the unit tests that can be run again and again for regression testing.
Acceptance tests - when a user/customer/business receives the functionality, they (or your test department) will conduct acceptance tests to ensure that the functionality meets their requirements.
You might also like to investigate white box and black box testing. There are also performance and load testing, and testing of the "ilities" to consider.
Unit test: when it fails, it tells you what piece of your code needs to be fixed.
Integration test: when it fails, it tells you that the pieces of your application are not working together as expected.
Acceptance test: when it fails, it tells you that the application is not doing what the customer expects it to do.
Regression test: when it fails, it tells you that the application no longer behaves the way it used to.
Here's a simple explanation for each of the mentioned tests and when they are applicable:
Unit Test
A unit test is performed on a self-contained unit (usually a class or method) and should be performed whenever a unit has been implemented or updating of a unit has been completed.
This means it's run whenever you've written a class/method, fixed a bug, changed functionality...
Integration Test
Integration tests aim to test how well several units interact with each other. This type of test should be performed whenever a new form of communication has been established between units or the nature of their interaction has changed.
This means it's run whenever a recently written unit is integrated into the rest of the system, or whenever a unit which interacts with other systems has been updated (and has successfully completed its unit tests).
Regression Test
Regression tests are performed whenever anything has been changed in the system, in order to check that no new bugs have been introduced.
This means it's run after all patches, upgrades, bug fixes. Regression testing can be seen as a special case of combined unit test and integration test.
Acceptance Test
Acceptance tests are performed whenever it is relevant to check that a subsystem (possibly the entire system) fulfils its specifications.
This means it's mainly run before finishing a new deliverable or announcing completion of a larger task. See this as your final check to see that you've really completed your goals before running to the client/boss and announcing victory.
This is at least the way I learned, though I'm sure there are other opposing views. Either way, I hope that helps.
I'll try:
Unit test: a developer would write one to test an individual component or class.
Integration test: a more extensive test that would involve several components or packages that need to collaborate
Regression test: Making a single change to an application forces you to re-run ALL the tests and check out ALL the functionality.
Acceptance test: End users or QA do these prior to signing off to accept delivery of an application. It says "The app met my requirements."
Unit Test: is my single method working correctly? (NO dependencies, or dependencies mocked)
Integration Test: are my two separately developed modules working correctly when put together?
Regression Test: did I break anything by changing/writing new code? (Running unit/integration tests with every commit is, technically, automated regression testing.) More often used in the context of QA - manual or automated.
Acceptance Test: testing done by the client, in which they "accept" the delivered software.
Can't comment (reputation too low :-| ) so...
@Andrejs makes a good point about the differences between the environments associated with each type of testing.
Unit tests are typically run on the developer's machine (and possibly during a CI build), with dependencies on other resources/systems mocked out.
Integration tests by definition have to have (some degree of) availability of the dependencies - the other resources and systems being called - so the environment is more representative. Data for testing may be mocked, or a small obfuscated subset of real production data.
UAT/acceptance testing has to represent the real-world experience to the QA and business teams accepting the software, so it needs full integration, realistic data volumes and fully masked/obfuscated data sets to deliver realistic performance and an end-user experience.
Other "ilities" are also likely to need the environment to be as close as possible to reality to simulate the production experience, e.g. performance testing, security, ...