I'm trying to add unit tests and integration tests to my Grails application, but I'm having trouble distinguishing between the two and deciding when to use a unit test versus an integration test for my controller actions and services.
The tutorials I found online are not very clear, and I can't find a complete example to follow.
Can you please share some helpful resources?
I follow these guidelines:
Try writing as many unit tests as you can. They can be written for controllers, services, domain classes or any other Groovy classes. The idea is that unit tests are the developer's friend: writing enough of them means the developer makes fewer mistakes, and because they execute quickly, verification is quick too (a minimal example is sketched after the list below). But unit tests cannot test the following:
Criteria queries, HQL queries
Actual database interactions (queries, transactional behaviour, updates, DB constraints, etc.)
Inter-module interactions
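To make the "friend of the developer" point concrete, here is a minimal sketch of a Grails unit test (Spock syntax, assuming the Grails 3.3+ testing support traits; BookController and BookService are hypothetical names). Note that nothing in it touches a real database:

    import grails.testing.web.controllers.ControllerUnitTest
    import spock.lang.Specification

    class BookControllerSpec extends Specification implements ControllerUnitTest<BookController> {

        void "index action delegates to the service and exposes the result"() {
            given: "the service is replaced with a stub, so no database is involved"
            controller.bookService = Stub(BookService) {
                list() >> [new Book(title: 'Groovy in Action')]
            }

            when:
            controller.index()

            then:
            model.bookList*.title == ['Groovy in Action']
        }
    }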
So we write integration tests as well.
Integration tests take longer to execute, and writing them often requires bootstrapping data. But they are really helpful for testing functionality end to end (excluding actual user interaction through the UI, for which functional tests are written). So integration tests can be written for the following (a sketch follows the list below):
Testing all database interactions, since unit tests do not actually test database interactions. This also includes testing criteria, HQL, etc.
Testing transactional behaviour (which is dependent on db)
Testing implementations end to end. So this will also test how two independently created modules interact with each other and make sure we have created them correctly.
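For contrast, a sketch of a Grails integration test that exercises a criteria query against the real database (assuming Grails 3.3+; the Book domain class and the query are hypothetical):

    import grails.gorm.transactions.Rollback
    import grails.testing.mixin.integration.Integration
    import spock.lang.Specification

    @Integration
    @Rollback
    class BookQueryIntegrationSpec extends Specification {

        void "criteria query runs against the real database"() {
            given: "some persisted data"
            new Book(title: 'Groovy in Action', pages: 700).save(flush: true)
            new Book(title: 'Short Stories', pages: 90).save(flush: true)

            when: "the criteria query is executed"
            def longBooks = Book.withCriteria { gt 'pages', 100 }

            then: "only the matching row comes back"
            longBooks*.title == ['Groovy in Action']
        }
    }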
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things slip out of focus.
I prefer unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds.
One more argument for unit tests is that they force you to decouple your code. Integration tests always tempt you to just rely on some other component already existing and being initialized.
Important links:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html
http://docs.grails.org/latest/guide/testing.html
Unfortunately it is not just a matter of preference or speed. It is a huge subject, but I can give you some advice based on my experience.
If you expect to cover your database access code (queries, transactional behaviour) with unit tests, you are deluding yourself. You are testing how your queries comply with the in-memory implementation of GORM. Not Hibernate, not your database.
I usually have two types of tests: unit tests and functional tests. The functional tests perform a full test, running against a real database and driving the system the way a user would (via Geb if it is a web site, via a REST client if it is a REST API).
The functional tests set up an initial state by executing some kind of fixture code first. This can be registering a user and logging them in, for example. Then the test runs, and the postconditions are checked. Here, you can check the postconditions either by accessing the database through the GORM API, or by using production API calls (with the danger of covering a bug with another bug).
Sometimes your system will interact with a third-party system. If you can, mock the third-party system by injecting a mock implementation into the system under test.
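A minimal sketch of that idea with a Spock mock (CheckoutService and PaymentGateway are invented names):

    import spock.lang.Specification

    class CheckoutServiceSpec extends Specification {

        void "checkout charges the payment gateway exactly once"() {
            given: "the third-party gateway is replaced by a mock injected into the system under test"
            def gateway = Mock(PaymentGateway)
            def service = new CheckoutService(paymentGateway: gateway)

            when:
            service.checkout(42L, 10.0)

            then: "we verify the interaction without ever calling the real gateway"
            1 * gateway.charge(10.0) >> true
        }
    }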
There are also tools like Spring Cloud Contract that allow you to create a mock server for your system under test and a specification for your third-party system. See https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html
I use unit tests to thoroughly test all execution paths of a given class. I try to trigger all exception states and all secondary scenarios to make sure that everything is covered. I don't think it is realistic to reach 100% coverage using functional or integration tests alone.
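For example, an exception path that would be awkward to provoke end to end is easy to force in a unit test by making a stubbed collaborator fail (a sketch; ReportService, ReportRepository and ReportGenerationException are hypothetical):

    import spock.lang.Specification

    class ReportServiceSpec extends Specification {

        void "a failing repository is translated into a domain exception"() {
            given: "a repository stub that blows up"
            def repository = Stub(ReportRepository) {
                findAll() >> { throw new RuntimeException('connection lost') }
            }
            def service = new ReportService(reportRepository: repository)

            when:
            service.buildReport()

            then: "the service wraps the low-level failure"
            thrown(ReportGenerationException)
        }
    }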
Related
I read a lot about unit testing and integration testing, and the unit testing part I understand fairly well - isolate the object under test, and mock out dependencies using interfaces and inject those. (Or use seams to inject testable behavior).
However, something that is still a mystery to me even after searching is integration testing. Every blog and link talks about testing various components together, do's and don'ts, CI servers, etc., but there's not much explanation of how this is actually done.
Is an integration test automated? Or is this a manual effort? If it is automated, do I write this as code in the native language my app is in? How do I check or verify if the integration test is working as expected?
For example, if I have 4 services (a Socket Client, a Socket Server, a Database, and a Web Application) and I want to do some integration testing on how these 4 services interact with each other, how exactly would I approach this? I know that some dummy data will be involved, but which part of my integration test is checking that the systems are working together correctly? This part is really unclear to me.
As you said about unit testing, you need to isolate the object under test, and mock out dependencies using interfaces and inject those...
An integration test is done without isolating the unit. You can test two parts or more (but you shouldn't have extensive integration test suites or big test scenarios). An example of an integration test is testing your code against the database; for that you may need to initialise the database and clean it up so the tests are repeatable.
You could also have interface tests (for example, client execution and integration with the server), and so on.
Don't forget that these tests come with downsides: they are slower and harder to maintain. But they have a different purpose than unit tests, namely checking that the units work as expected with their real dependencies.
So, to wrap up: if you write a test without totally isolating the unit, it is an integration test. Given its nature, it's better to test the logic of your code with unit tests and reserve a smaller number of integration tests for where they are needed, to test the interaction between the units.
You can also check this nice introduction https://softwareengineering.stackexchange.com/questions/48237/what-is-an-integration-test-exactly
An interesting definition from The Pragmatic Programmer: integration tests show that the major parts of a system work well together.
Upon reading Growing Object-Oriented Software, Guided by Tests, I learnt about test isolation and test fragility: the idea that each test should be very specific to a piece of code or functionality, and that the overlap in code coverage between tests should be kept to a minimum.
The implied ideal is that each change in the code should break only one test. That avoids spending time going through multiple broken tests to confirm that one change is the cause and that it is fixed by the test modifications.
Now this seems easy enough for unit tests; they are very isolated by their nature.
However, when it comes to integration tests, it seems hard to avoid having multiple tests exercise the same code paths, particularly when they are run in addition to the unit tests.
So my question is: what dependencies should be mocked when doing integration testing? Should anything be mocked at all? Should a single execution path be tested, with all side effects not directly relevant to that code path mocked?
I'm toying with the idea of doing pairwise integration testing. Test one relationship between two objects, and mock everything else. Then changes in either one of these objects should have minimal impact on other integration tests, in addition to forming a complete chain of end-to-end tests by means of pairs.
Thanks for any info..
Edit: Just to clarify, I'm basically asking "How do I avoid large numbers of failing integration tests during the normal course of development?", which I assume is achieved by using mocks, and is why I asked about what to mock.
Update: I've found a very interesting talk about integration tests by J. B. Rainsberger which I think answers this fairly well, if perhaps a bit controversially. The title is "Integration Tests are a Scam", so as you can guess, he does not advocate integration tests (end-to-end type tests) at all. The argument is that integration tests will always fall far short of the number needed to thoroughly test the possible interactions (due to combinatorial explosion), and may give false confidence.
Instead he recommends what he calls collaboration tests and contract tests. It's a 90-minute talk and unfortunately the whiteboard is not very clear and there aren't code examples, so I'm still getting my head around it. When I have a clear explanation I'll write it here, unless someone else beats me to it.
Here's a brief summary of Contract Tests. Sounds like Design by Contract type assertions, which I believe could/would be implemented in a Non-Virtual Interface pattern in C++.
http://thecodewhisperer.tumblr.com/post/1325859246/in-brief-contract-tests
Integration Tests are a Scam video talk:
http://www.infoq.com/presentations/integration-tests-scam
Summary:
Integration tests are a scam. You’re probably writing 2-5% of the integration tests you need to test thoroughly. You’re probably duplicating unit tests all over the place. Your integration tests probably duplicate each other all over the place. When an integration test fails, who knows what’s broken? Learn the two-pronged attack that solves the problem: collaboration tests and contract tests.
For integration tests you should mock the minimum number of dependencies needed to get the test working, but not fewer :-)
Since the integration of the components in your system is obviously the thing you want to test during integration testing, you should use real implementations as much as possible. However, there are some components you obviously want to mock, since you don't want your integration tests to start mailing your users, for instance. When you don't mock these dependencies, you are mocking too little.
That doesn't mean, by the way, that you shouldn't allow an integration test to send mail at all, but at the very least you want to replace the mail component with one that will only send mail to some internal test mailbox.
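As a sketch of that idea (the MailService interface and class names are made up), the integration environment can swap in an implementation that only records what would have been sent:

    // Hypothetical production interface
    interface MailService {
        void send(String to, String subject, String body)
    }

    // Test double wired in for integration tests: it captures mail instead of
    // delivering it, so assertions can inspect what would have gone out.
    class CapturingMailService implements MailService {
        final List<Map> sent = []

        void send(String to, String subject, String body) {
            sent << [to: to, subject: subject, body: body]
        }
    }

An integration test can then assert on capturingMailService.sent (or point the real component at an internal test mailbox) instead of polling a live inbox.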
For integration tests I lean towards mocking the service rather than the representation, for example using Mirage instead of a third-party REST API and Dumpster rather than a real SMTP server.
This means that all layers of your code are tested, but none of the 3rd parties are tested so you are free to refactor without worrying that the tests will fail.
Unit tests should have mock objects, but integration tests should have few if any mocks (otherwise, what is being integrated?). I think it's overkill to do pairwise mocking; it will lead to an explosion of tests that might each take a long time, and to lots of copy-and-paste code which will be a pain to change if requirements change or new features are added later.
I think it's fine not to have any mocks in the integration tests. You should have everything mocked in the unit tests so you know that each individual unit works as expected in isolation. The integration test then verifies that everything works wired together.
Regarding the Contract/Collaborator test pattern (described by J. B. Rainsberger in "Integration Tests are a Scam", mentioned in the question above) and relating to the question here: I interpreted his talk to mean that when you own the code for both the service side and the client side, you should not need any integration tests at all. Instead you should be able to rely on mocks which implement a contract.
The talk is a good reference for high level description of the pattern but doesn't go into detail (for me at least) about how to define or reference a contract from a collaborator.
One common example of the need for Contract/Collaborator pattern is between an API's server / client (for which you own the code of both). Here's how I've implemented it:
Define the contract:
First, define the API schema. If your API uses JSON you might consider JSONSchema; the schema definition can be considered the "Contract" of the API. (As a side note, if you're about to do that, make sure you know about RAML or Swagger, since they essentially make writing JSONSchema APIs a lot easier.)
Create fixtures which implement the contract:
On the server side, mock out the client requests to allow unit testing of the requests/responses. To do this you will create client request fixtures (aka mocks). Once you have your API defined, validate the fixtures against the JSONSchema to ensure that they comply. There are a host of schema validators - I currently use AJV (Javascript) and jsonschema (Python), but most languages should have an implementation.
On the client(s) side you will likely mock out the server responses to allow unit testing of the requests. Follow the same pattern as the server, validating the request and response fixtures via JSONSchema.
If both the Client and the Server are validating their fixtures against the contract, then whenever the API Contract changes, the out of date implementations on both sides will fail JSONSchema validation and you'll know it's time to update your fixtures and possibly the code which relies on those fixtures.
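A sketch of that validation step in Groovy, assuming the org.everit.json.schema library is on the classpath (the schema and fixture file paths are made up):

    import org.everit.json.schema.Schema
    import org.everit.json.schema.loader.SchemaLoader
    import org.json.JSONObject

    // Load the contract (the API's JSON Schema) and a recorded fixture, then
    // fail loudly if the fixture has drifted away from the contract.
    Schema schema = SchemaLoader.load(new JSONObject(new File('contracts/create-user-request.json').text))
    JSONObject fixture = new JSONObject(new File('fixtures/create-user-request.json').text)

    schema.validate(fixture)   // throws org.everit.json.schema.ValidationException on mismatch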
Q1: When is it ideal to run unit tests? Should they be run each time before I debug the app? Should they be run before I commit changes to SVN? I think if an app only has a couple of unit tests, they should be run each time the app is about to be debugged. But let's say we have hundreds of unit tests that take a while to complete; I'm not sure whether that is ideal. I think then it would be better to just run them before committing or deploying.
Q2: In my app I'm using the repository pattern with a service layer. I've done some research on how to test a service when the service calls a repository and the repository queries the database. So in order for it to be a true unit test and not an integration test, I have to find a way to test without touching the database. I found that people use Moq to mock their repository. Here's where I have a problem: to me it seems that if I mock a repository then I'm changing the behavior of how the method is supposed to work, which makes it seem like a pointless unit test. It doesn't seem like you are actually testing your code. Am I completely wrong about this? Thanks for any advice.
Let me take a shot.
A1: When you refactor existing code, you should execute the corresponding unit tests (not all of them) and see whether anything is broken by your changes. For new functionality you should implement new unit tests in parallel, using TDD. You should never execute all the unit tests on your own; use or rely on continuous integration for that.
A2: I had the same opinion as you. But now I am convinced that unit testing of the service layer is required. Whatever can be covered by unit testing should be covered. At this point, the core of your services might just be delegation to repositories, but services evolve. Services take up responsibilities such as parameter validation, authorization, logging, transactions, batch-support APIs and so on. Then it is not only data access but many more things. If I were in your place, I would go for unit testing of services by mocking the repositories. Sometimes services also provide convenience methods on top of the repository.
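For illustration, a sketch of a service unit test with the repository mocked (Spock syntax; UserService and UserRepository are hypothetical). The point is not the data access itself but the logic the service adds on top of it:

    import spock.lang.Specification

    class UserServiceSpec extends Specification {

        void "inactive users are filtered out before being returned"() {
            given: "a mocked repository, so no database is touched"
            def repository = Mock(UserRepository)
            def service = new UserService(userRepository: repository)

            when:
            def result = service.activeUsers()

            then: "the repository is consulted exactly once"
            1 * repository.findAll() >> [
                [name: 'Ann', active: true],
                [name: 'Bob', active: false]
            ]
            and: "the service's own logic (filtering) is what is being tested"
            result*.name == ['Ann']
        }
    }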
Hope it might be of some help to you.
A1. When making changes to your code, the more often you run the unit tests, the faster you will get feedback on whether the behaviour they were written to assert has been affected, so the more often the better! Unit tests should be very fast, and running several hundred should only take a couple of minutes at most. It might also be worth looking into Infinitest (if you are working with Java; I expect an alternative exists for .NET etc.). It is an Eclipse plugin that automatically runs your unit tests when Eclipse builds your project, and it is clever enough to run only the tests affected since the last run, e.g. if you update a test, or some "application" code that is covered by some unit tests, only the affected tests are executed.
A2. Unit tests will cover many different scenarios that call your services and DAOs many times. Using "real" services makes it difficult to guarantee the results of each call (and setting up the data for each test can be painful), and the tests can also be slow. It's usually better to mock these services when unit testing, and to test them independently with integration tests.
Just putting this one out for debate really.
I get unit testing. Sometimes feels time consuming but I'm all for the benefits.
I've an application set up that contains a repository layer and a service layer, using IoC, and I've been unit testing the methods.
Now I know the benefits of isolating my methods for unit testing so there is little or no dependency on other methods.
The question I've got is this: if I only ever access my repository methods through my service layer methods, would testing only the service layer not be good enough? I'm testing against a test database.
Could it not be considered an extension of the idea that you only need to test your public methods? Maybe I'm just trying to skip some testing ;)
Yes, you should test your repository layer, although the majority of these tests fall into a different classification of tests. I usually refer to them as integration tests to distinguish them from my unit tests, the difference being that there is an external dependency on a resource (your database) and that these tests will likely take much longer to run.
The primary reason for testing your repositories separately is that you'll be testing different things. The repository is responsible for handling translation and interaction with whatever persistence store you're using. The service layer, on the other hand, is responsible for coordinating your various repositories and other dependencies into functionality that represents business logic, which likely involves more than just a relay to a repository method and in some instances may involve multiple calls to multiple repositories.
First, to clarify the service layer testing - when testing the service layer, the repositories should be mocked so that they are isolated from what you're testing in the service layer. As you pointed out in your comment, this gives you a more granular level of testing and isolates the code under test. Your unit tests will also run much faster now because there are no database connections slowing them down.
Now, here are a few advantages of adding integration tests to your repositories...
It allows you to test out those pieces of code as you're writing them, a la TDD.
It ensures that whatever persistence language you're using (SQL, HQL, serialized objects, etc.) is formulated correctly for the operation you're attempting to perform.
If you're using an object-relational mapper, it ensures that your mappings are defined correctly.
In the future, you may find that you need to support another type of persistence. Depending on how your repository tests are structured, you may be able to reuse a large number of the tests to verify that the new database schema works correctly. For repository methods that implement database specific logic, obviously you'll have to create separate tests.
When coupled with Continuous Integration it's nice to have the repository tests separated. Integration tests, by nature take longer to run than unit tests. As such, they're usually run at less frequent intervals so that the immediate feedback available from running unit tests is not delayed.
Those are all advantages that I've seen in various projects that I've worked on. There may be more.
All that having been said, I will admit that I'm not as thorough with the repository integration tests as I am with unit tests. When it comes to testing an update on a particular object, for example, I'm usually content testing that one database column was successfully updated rather than creating a separate test for each individual column or a larger test that verifies every column in one test. For me, it depends on the complexity of the operation that the respository method is performing and whether there's any special condition that needs to be isolated.
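As a sketch of such a repository-level integration test (written here in Grails/GORM and Spock syntax to match the rest of this thread; the Product domain class is hypothetical, and the same idea applies to any stack):

    import grails.gorm.transactions.Rollback
    import grails.testing.mixin.integration.Integration
    import spock.lang.Specification

    @Integration
    @Rollback
    class ProductRepositoryIntegrationSpec extends Specification {

        void "an update to the name column is actually persisted"() {
            given:
            def product = new Product(name: 'Draft name').save(flush: true)

            when:
            product.name = 'Final name'
            product.save(flush: true)

            and: "clear the session so the next read really hits the database"
            Product.withSession { it.clear() }

            then:
            Product.get(product.id).name == 'Final name'
        }
    }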
You should test your repository layer. However, if you have integration, story or system tests that cover it, then you can make a good case for not having unit tests for it as well.
Unit testing is great for complex stand-alone objects, but there is no point spending a long time writing unit tests for simple methods that are covered by “higher level” tests.
Wouldn't this depend on how smart the repository access layer is? If your repository takes parameters to filter the given result set (LINQ to SQL, for example), surely that logic will need to be tested.
Unit tests: test an individual piece of logic (a method) without worrying about the dependencies of that logic. They mostly fall into the white-box category.
Integration tests: test an end-to-end flow, or more than one layer together, to ensure its correctness. They mostly fall into the black-box category.
In a DAO, most of the time there is no business logic; it just forms a query for a particular database implementation. So there is no need for a unit test if we have already covered it in our integration tests. Still, we can write unit tests for a DAO if there is some logic in it.
Because DAO layers are so tightly coupled to the database implementation, most of the time a JUnit test for a DAO effectively becomes a test of the underlying database.
The query we build can only be validated by the underlying database engine.
I used to write unit tests (you could call them integration tests) for DAOs by using an actual database, or by substituting a compatible database that follows the same SQL standard (for example, a MySQL engine can be replaced by SQLite or an in-memory H2 database), and injecting this database into the DAO to test the DAO layer and the queries built in that layer.
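A sketch of that approach using an in-memory H2 database and plain groovy.sql.Sql (assuming the H2 driver is on the classpath; the users table and UserDao are made up):

    import groovy.sql.Sql
    import spock.lang.Specification

    class UserDaoSpec extends Specification {

        void "findByEmail builds and runs the expected query"() {
            given: "an in-memory H2 database standing in for the real engine"
            def sql = Sql.newInstance('jdbc:h2:mem:daotest;DB_CLOSE_DELAY=-1', 'org.h2.Driver')
            sql.execute('create table users (id int primary key, email varchar(255))')
            sql.execute("insert into users values (1, 'ann@example.com')")

            and: "the DAO is handed the test connection"
            def dao = new UserDao(sql: sql)   // hypothetical DAO built on groovy.sql.Sql

            expect:
            dao.findByEmail('ann@example.com').id == 1

            cleanup:
            sql.execute('drop table users')
            sql.close()
        }
    }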
I get unit testing
The next step is Test-Driven Development (TDD). It will answer your question.
I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing one test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was treated as more of an anti-pattern. Is that right?
IMO, and I have no literature to back me on this, the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and evaluate the actual result against the expected one.
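The shape of the test is the same regardless of scope. As a sketch (Spock syntax; OrderService and InventoryService are invented names), an integration-scoped test still reads as scope, input, execution, and evaluation of actual against expected:

    import spock.lang.Specification

    class OrderFlowIntegrationSpec extends Specification {

        void "placing an order updates stock through the real inventory service"() {
            given: "the scope: an OrderService wired to a real InventoryService"
            def inventory = new InventoryService(initialStock: [widget: 5])
            def orders = new OrderService(inventoryService: inventory)

            when: "the input is applied"
            orders.place('widget', 2)

            then: "the actual outcome is evaluated against the expected one"
            inventory.stockOf('widget') == 3
        }
    }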
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
What I would do in order to test the "real" application: making the http request (either manually or automated) with some ids from the database and then look in the filesystem if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test-logic?
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and no dependencies), while your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
Is this what is called an "integration test"?
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like:
class OutFileValidator {
    // Open the written file and check that every expected record is present.
    static boolean isCorrect(String fName, List<String> dataList) {
        def lines = new File(fName).readLines()
        dataList.every { record -> lines.contains(record) }
    }
}
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with Watir / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive out the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation; however, they verify that your implementation works. If there is something that isn't working in the web app for that feature, the behaviour needs to be captured in a scenario.
You can of course use integration testing, as others pointed out.