Testing the results of the endpoints is straightforward.
However, shouldn't I also have a look at the db layer, to check (for example) that the data I POSTed is saved correctly? I'm unclear how/if the business logic behind the REST call should be tested.
In my perfect world (someday I'd like to live there...) there are several different tests involved in fully validating a RESTful service I am providing.
There are unit tests for each of the logical layers of the application, that use dependency injection to mock the next lower layer and validate that each unit works correctly. This might be a unit that marshals/unmarshals the parameters and response, a unit that executes business logic, and a unit that manages persistence. (There could be multiple units, each with their own tests at each layer.) These tests have no dependencies outside the testing framework, with the exception of the persistence layer which has to exercise your persistence implementation.
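As a rough sketch of what such a layer-isolating unit test can look like (the class and method names here are my own illustration, not from the original post), the business-logic unit receives its persistence dependency through the constructor so the test can substitute a fake:

```python
# Hypothetical sketch: a business-logic unit that depends on an injected
# persistence object, so the unit test never touches a real database.
class OrderService:
    def __init__(self, repository):
        self.repository = repository  # the "next lower layer", injected

    def place_order(self, customer_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.repository.save({"customer_id": customer_id, "amount": amount})


class FakeOrderRepository:
    """Test double standing in for the persistence layer."""
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)
        return {**order, "id": len(self.saved)}


def test_place_order_persists_via_repository():
    repo = FakeOrderRepository()
    service = OrderService(repo)

    result = service.place_order(customer_id=42, amount=10)

    assert result["id"] == 1
    assert repo.saved == [{"customer_id": 42, "amount": 10}]
```

The same pattern repeats at each layer: each unit is tested against a fake of the layer below it, and only the persistence layer's own tests touch the real persistence implementation.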
There are also integration tests that require a running system. This is where you call the running service, and verify that you got the expected response. You may also inspect side effects of the call. On my team, we often do that by making a different service call (or calls) that relies on the result of the first call. That exercises more of the system. We find that direct inspection of the persistence layer for side effects seldom tells us much that we can't get by using a different service call.
Yes, you should test the db layer at some point, but probably not in the same unit test as the one that tests the result of the endpoint.
At the unit test level you would like to have small/quick/independent tests, so you might end up with one test suite for the REST part and another one focusing on the db layer.
Then you might have some more integration/end-to-end tests that check the whole system: making the request and checking that the db is correctly updated.
My own experience, working in QA, is that the tests involving the DB are done as integration/end-to-end tests, from a functional perspective, by QA using a third-party tool (we use Robot Framework for that).
One way of doing it is to use the RESTful service itself to verify that the data persisted correctly.
For example, you can use a PUT request to persist an entity and then a GET to retrieve it, and make sure all the properties are equal. This is more of an integration test, covering the flow end to end.
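A minimal sketch of that round-trip check, assuming a locally running service at a made-up URL and using Python's requests library (the endpoint path and payload are purely illustrative):

```python
# Integration-test sketch: persist an entity with PUT, read it back with GET,
# and compare the properties. URL and payload are hypothetical.
import requests

BASE_URL = "http://localhost:8080/api/widgets"


def test_put_then_get_round_trip():
    payload = {"name": "sprocket", "size": 3}

    put_response = requests.put(f"{BASE_URL}/123", json=payload)
    assert put_response.status_code in (200, 201)

    get_response = requests.get(f"{BASE_URL}/123")
    assert get_response.status_code == 200

    stored = get_response.json()
    for key, expected in payload.items():
        assert stored[key] == expected
```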
Especially if your application is mostly doing CRUD operations, using this approach lets you avoid creating tests for each and every layer (such as repositories).
I don't understand how I'm testing anything with unit testing.
Suppose I am testing that my repository class can retrieve values from the database correctly. The proper way to do this would be to actually call the real database and retrieve and check those values.
But the idea behind unit testing is that it should be done in isolation, and connecting to a running database is not isolation. So what is usually done is to mock or stub the database.
But why would testing on a fake database with hardcoded data and hardcoded return values even test anything? It seems tautological and a waste of time.
Or am I not understanding how to unit test properly?
Does one even unit test database calls?
I don't understand how I'm testing anything with unit testing.
Short answer: you are testing the logic, and leaving out the side effects.
You aren't testing everything; but you are testing something.
Furthermore, if you keep in mind that you aren't really testing the code with side effects, then you are motivated to arrange your code so that the pieces that actually depend on the side effect are small. The big pieces don't actually care where the data comes from, so those are easy to test.
So "something" can be "most things".
There is an impedance problem -- if your test doubles impersonate the production originals inadequately, then some of your test results will be inaccurate.
my philosophy is to test as little as possible to reach a given level of confidence
Kent Beck, 2008
One way of imagining "as little as possible" is to think in terms of cost -- we're aiming for a given confidence level, so we want to achieve as much of that confidence as we can using cheap unit tests, and then make up the difference with more expensive techniques.
Cory Benfield's talk Building Protocol Libraries the Right Way describes an example of the kind of separation we're talking about here. The logic of how to parse an HTTP message is separable from the problem of reading the bytes. If you make the complicated part easy to test, and the hard to test part too simple to fail, your chances of succeeding are quite good.
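To illustrate that separation in miniature (this is my own hedged sketch, not code from the talk): the parsing logic is a pure function that can be unit-tested exhaustively without any I/O, while the part that reads bytes stays too thin to need much testing:

```python
# The "complicated but easy to test" part: pure parsing, no I/O.
def parse_status_line(line: str) -> tuple[str, int, str]:
    """Parse e.g. 'HTTP/1.1 404 Not Found' into (version, code, reason)."""
    version, code, reason = line.strip().split(" ", 2)
    return version, int(code), reason


def test_parse_status_line():
    assert parse_status_line("HTTP/1.1 404 Not Found") == ("HTTP/1.1", 404, "Not Found")


# The "hard to test but too simple to fail" part: just hands bytes to the parser.
def read_status_line(sock) -> tuple[str, int, str]:
    return parse_status_line(sock.recv(4096).decode("ascii").splitlines()[0])
```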
I think your concern is valid. For me, TDD is more of an evolutionary design practice than unit testing practice, but I'll save that for another discussion.
In your example, what we are really testing is that the logic contained within your individual classes is sound. By stubbing the data coming from the database you create a controlled scenario and can ensure your code works for that particular case. This makes it much easier to ensure full test coverage for all data scenarios. You're correct that this really doesn't test the whole system end to end, but the point is to reduce the overall test maintenance cost and enable faster feedback.
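A hedged sketch of what those controlled scenarios can look like in practice: the data the class would normally read from the database is stubbed, and the test is parametrized over several scenarios (the class and field names are invented for illustration):

```python
# Sketch: the repository is stubbed so each data scenario is fully controlled.
from unittest.mock import Mock

import pytest


class InvoiceService:
    def __init__(self, repository):
        self.repository = repository

    def total_outstanding(self, customer_id):
        invoices = self.repository.find_unpaid(customer_id)
        return sum(inv["amount"] for inv in invoices)


@pytest.mark.parametrize("unpaid, expected", [
    ([], 0),                                    # no unpaid invoices
    ([{"amount": 10}], 10),                     # a single invoice
    ([{"amount": 10}, {"amount": 2.5}], 12.5),  # several invoices
])
def test_total_outstanding(unpaid, expected):
    repo = Mock()
    repo.find_unpaid.return_value = unpaid      # stubbed "database" data

    assert InvoiceService(repo).total_outstanding(customer_id=1) == expected
```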
My approach is to mock most collaborators at the unit test level, then write acceptance tests at the integration test level, which validate your system using real data. Because the unit tests with their mocked data allow you to test out various data scenarios, you only need to cover a few of those scenarios with integration tests to feel confident that your code will perform as you expect.
You can test your code against an actual database in isolation. Just create a new database instance for every test, or execute the tests synchronously one after another and clean the database before each test.
But using an actual database will make your tests slow, which will slow down your work, because you want quick feedback on what you are doing.
Do not test every class; test the main feature logic, which can use many different classes, and mock/stub only the dependencies that make the tests slow.
Find your application boundaries and test the logic between them without mocking.
For example, in a trivial web API application the boundaries can be:
- controller action -> request (input)
- controller action -> response (output)
- database -> side effect of the received request
Assume we live in a perfect world where setting up a new database and web server takes milliseconds. Then you would test the whole pipeline of your application:
1. Configure database for test
2. Send request to the web api server
3. Assert that response contains expected data
4. Assert that database state changed as expected
But in today's world your boundaries will be the controller action and an abstracted database access point, which makes your test look like this (a code sketch follows the list below):
1. Configure a mocked database access point (repository)
2. Call the controller action with the given parameters
3. Assert that the action returns the expected result
4. Possibly assert that the mocked repository received the expected update arguments
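A minimal sketch of those four steps, using unittest.mock for the repository access point (the action and repository names are made up for illustration):

```python
# Steps 1-4 above, sketched with a mocked repository access point.
from unittest.mock import Mock


def update_user_action(repository, user_id, new_name):
    """Stand-in for a controller action: validates input, delegates to the repo."""
    if not new_name:
        return {"status": 400, "error": "name required"}
    repository.update_name(user_id, new_name)
    return {"status": 200}


def test_update_user_action():
    repo = Mock()                                        # 1. configure mocked repository

    response = update_user_action(repo, 7, "Ada")        # 2. call the action with given parameters

    assert response == {"status": 200}                   # 3. assert the expected result
    repo.update_name.assert_called_once_with(7, "Ada")   # 4. assert the repo got the expected update
```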
If your application has no logic and just reads/updates data in the database, test against an actual database or, if your database framework allows it, use an in-memory database.
I'm trying to add unit tests and integration tests for my Grails application, but I have trouble distinguishing between the two and deciding when to use unit or integration tests for my controller actions and services.
The tutorials I found online are not very clear. I can't find a complete example to follow.
Can you please share helpful topics?
I follow the following guidelines:
Try writing as many unit tests as you can. They can be written for controllers, services, domain classes or any other Groovy classes. The idea is that unit tests are friends for developers: writing enough unit tests will make sure the developer makes fewer mistakes, and because they execute quickly, verification is quick too. But unit tests cannot test the following:
Criteria queries, HQL queries
Actual database Interactions (queries, transactional behaviour, updates, db constraints etc.)
Inter-module interactions
So we write the Integration tests as well
Integration tests take longer to execute. Writing integration tests often requires bootstrapping data, but they are really helpful for testing functionality end to end (excluding actual user interaction through the UI, for which functional tests are written). So integration tests can be written for:
Testing all database interactions, since unit tests do not actually exercise the database. This also includes testing criteria queries, HQL, etc.
Testing transactional behaviour (which is dependent on db)
Testing implementations end to end. So this will also test how two independently created modules interact with each other and make sure we have created them correctly.
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up. In that time, certain things slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds.
One more argument for unit tests is that they force you to decouple your code. Integration tests always tempt you to just rely on some other component already existing and being initialized.
Important links:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html
http://docs.grails.org/latest/guide/testing.html
Unfortunately, it is not just a matter of preference or speed. It is a huge subject, but I can give you some advice based on my experience.
If you expect to be covering your database access code (queries, transactional behaviour) by using unit tests, you are deluding yourself. You are testing how your queries comply with the in-memory implementation of GORM. Not hibernate, not your database.
I usually have two types of tests: unit and functional tests. The functional tests perform a full test, running against a real database and driving the system as a user would (if it is a web site, via Geb; if it is a REST API, via a REST client).
The functional tests will set up a startup state by executing some kind of fixture code first. This can be registering a user and logging them in, for example. Then the test will run, and then the postconditions are checked. Here, you can check the postconditions either by accessing the database through the GORM API, or by using production API calls (danger of covering a bug with another bug).
Sometimes, your system will interact with a third system. Here, if you can, mock the third system by injecting a mock implementation into the system under test.
You also have tools like Spring Cloud Contract, which allow you to create a mock server for your system under test and a specification for your third-party system. See https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html
The unit tests I use to thoroughly test all execution paths of a given class. I will try to trigger all exception states and all secondary scenarios to make sure that everything is covered. I don't think it is realistic to achieve 100% coverage by using only functional or integration tests.
Upon reading Growing Object-Oriented Software, Guided by Tests, I learnt about test isolation and test fragility: the idea that each test should be very specific to a piece of code or functionality, and that the overlap in code coverage between tests should be kept to a minimum.
The implied ideal is that each change in the code should break only one test.
This avoids spending time going through multiple broken tests to confirm that one change is the cause and that it is fixed by the test modifications.
Now this seems easy enough for unit tests; they are very isolated by nature.
However, with integration tests it seems hard to avoid having multiple tests exercise the same code paths, particularly when they run in addition to the unit tests.
So my question is: what dependencies should be mocked when doing integration testing? Should anything be mocked at all? Should a single execution path be tested, with all side effects not directly relevant to that code path mocked?
I'm toying with the idea of doing pairwise integration testing. Test one relationship between two objects, and mock everything else. Then changes in either one of these objects should have minimal impact on other integration tests, in addition to forming a complete chain of end-to-end tests by means of pairs.
Thanks for any info..
Edit: Just to clarify, I'm basically asking "How do I avoid large numbers of failing integration tests during the normal course of development?", which I assume is achieved by using mocks, and why I asked about what to mock.
Update: I've found a very interesting talk about integration tests by J.B. Rainsberger, which I think answers this fairly well, if perhaps a bit controversially. The title is "Integration Tests are a Scam", so as you can guess, he does not advocate integration tests (end-to-end type tests) at all. The argument is that integration tests will always fall far short of the number needed to thoroughly test the possible interactions (due to combinatorial explosion), and may give a false sense of confidence.
Instead he recommends what he calls Collaboration Tests and Contract Tests. It's a 90-minute talk and unfortunately the whiteboard is not very clear and there aren't code examples, so I'm still getting my head around it. When I have a clear explanation I'll write it here, unless someone else beats me to it..
Here's a brief summary of Contract Tests. Sounds like Design by Contract type assertions, which I believe could/would be implemented in a Non-Virtual Interface pattern in C++.
http://thecodewhisperer.tumblr.com/post/1325859246/in-brief-contract-tests
Integration Tests are a Scam video talk:
http://www.infoq.com/presentations/integration-tests-scam
Summary:
Integration tests are a scam. You’re probably writing 2-5% of the integration tests you need to test thoroughly. You’re probably duplicating unit tests all over the place. Your integration tests probably duplicate each other all over the place. When an integration test fails, who knows what’s broken? Learn the two-pronged attack that solves the problem: collaboration tests and contract tests.
For integration tests you should mock the minimum amount of dependencies to get the test working, but not less :-)
Since the integration of the components in your system is obviously the thing you want to test during integration testing, you should use real implementations as much as possible. However, there are some components you obviously want to mock; for instance, you don't want your integration tests to start mailing your users. When you don't mock these dependencies, you obviously mock too little.
That doesn't mean, by the way, that you shouldn't allow an integration test to send mail at all, but at the very least you want to replace the mail component with one that only sends mail to some internal test mailbox.
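One hedged way to do that is to inject a recording fake in place of the real mail component, so the integration test can still assert that a mail would have been sent (the interface and function below are invented for illustration):

```python
# Sketch: replace the real mailer with a fake that records messages in memory.
class RecordingMailer:
    def __init__(self):
        self.outbox = []

    def send(self, to, subject, body):
        self.outbox.append({"to": to, "subject": subject, "body": body})


def register_user(email, mailer):
    """Stand-in for the system under test; real code would also persist the user."""
    mailer.send(to=email, subject="Welcome", body="Thanks for registering.")
    return {"email": email}


def test_registration_sends_welcome_mail_without_spamming_real_users():
    mailer = RecordingMailer()

    register_user("alice@example.test", mailer)

    assert len(mailer.outbox) == 1
    assert mailer.outbox[0]["to"] == "alice@example.test"
```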
For integration tests I lean towards mocking the service rather than the representation, for example using mirage instead of a 3rd-party REST API and Dumpster rather than a real SMTP server.
This means that all layers of your code are tested, but none of the 3rd parties are tested so you are free to refactor without worrying that the tests will fail.
Unit tests should have mock objects, but integration tests should have few if any mocks (otherwise, what is being integrated?). I think it's overkill to do pairwise mocking; it will lead to an explosion of tests that might each take a long time, and to lots of copy-and-paste code that will be a pain to change if requirements change or new features are added later.
I think it's fine not to have any mocks in the integration tests. You should have everything mocked in the unit tests to know that each individual unit works as expected in isolation. The integration test verifies that everything works wired together.
This is a discussion of the Contract/Collaborator test pattern (described by J.B. Rainsberger in "Integration Tests are a Scam", mentioned in the question above). Relating it to the question here: I interpreted his talk to mean that when you own the code for both the service side and the client side, you should not need any integration tests at all. Instead, you should be able to rely on mocks which implement a contract.
The talk is a good reference for high level description of the pattern but doesn't go into detail (for me at least) about how to define or reference a contract from a collaborator.
One common example of the need for the Contract/Collaborator pattern is between an API's server and client (where you own the code of both). Here's how I've implemented it:
Define the contract:
First define the API schema; if your API uses JSON you might consider JSONSchema. The schema definition can be considered the "contract" of the API. (And as a side note, if you're about to do that, make sure you know about RAML or Swagger, since they essentially make writing JSONSchema APIs a lot easier.)
Create fixtures which implement the contract:
On the server side, mock out the client requests to allow unit testing of the requests/responses. To do this you will create client request fixtures (aka mocks). Once you have your API defined, validate the fixtures against the JSONSchema to ensure that they comply. There are a host of schema validators - I currently use AJV (Javascript) and jsonschema (Python), but most languages should have an implementation.
On the client(s) side you will likely mock out the server responses to allow unit testing of the requests. Follow the same pattern as the server, validating the request and response fixtures via JSONSchema.
If both the Client and the Server are validating their fixtures against the contract, then whenever the API Contract changes, the out of date implementations on both sides will fail JSONSchema validation and you'll know it's time to update your fixtures and possibly the code which relies on those fixtures.
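As a small sketch of that fixture-validation step, using the Python jsonschema library mentioned above (the schema and fixture contents are invented examples):

```python
# Sketch: validate a client-request fixture against the API's JSON Schema contract.
from jsonschema import validate

CREATE_USER_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["name", "email"],
    "additionalProperties": False,
}

# The fixture the unit tests use when mocking the request to the server.
CREATE_USER_FIXTURE = {"name": "Ada Lovelace", "email": "ada@example.test"}


def test_fixture_complies_with_contract():
    # Raises ValidationError (and fails the test) as soon as the contract
    # changes and this fixture goes stale.
    validate(instance=CREATE_USER_FIXTURE, schema=CREATE_USER_SCHEMA)
```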
I have a REST web-service interface that calls down to a service layer, which orchestrates the creation, deletion, etc. of various objects in an entity layer. These entity-layer objects ultimately map to database records. I have a number of unit tests (in NUnit; it's a C# application) that test this interface by sending HTTP requests.
Consider my testing of a web service request that creates some entity-layer object. I obviously want to verify that the web service considers the request to have been processed correctly, by checking the HTTP status that it returns to me plus some data in the response body. I also want to independently verify that the correct database records have been created. I have a couple of ways (that I can think of) to do this:
The easiest way is to use existing 'reader' classes in the entity layer to read and validate the database entries. This is easy because they incorporate the validation and consistency logic for the entities they deal with, and using them is simple. I am uneasy about this, though, because I would be using the code I'm testing as part of the test. This seems to violate some principle of separation of concerns, and also introduces the possibility of an entity-layer bug causing the object creation to fail but appear to the unit test to have succeeded.
Alternatively, the test code could go straight to the database and do the checks itself. But then I'm embedding details about object storage and consistency rules in the test - which makes the test brittle if those details change, and also effectively means re-implementing, in the unit tests, the code I've already written in the entity layer.
I wonder what people think of the trade-offs involved with these (and maybe other) options, and what (if any) the best practice is. I'm not sure there is a right or wrong answer, but I've wondered about it for a while and I'm interested in other opinions.
EDIT
To clarify, I have separate test suites for the service layer and the entity layer. The concerns I have expressed -- using the code under test as part of a test -- also apply to these tests.
We see two different tests here: a test of the service methods and a test of the web client.
For testing the service methods (like a reader), you may want to create a database with predefined values (test data), call the reader, and test whether the reader's output matches the test data in the required way.
Once you've tested the service methods, you can move to the next test level and test the web client, again using the same test data but now checking whether the data shown in the web client matches the test data in the required way. At this test level, you can "trust" the readers (because they have been tested before).
Maybe you would feel more comfortable if you separated "unit testing" and "integration testing". For unit testing, verify that a compilation unit works as required. This could be testing the reader: you populate the database with defined data, call findAll() (or something else), and assert that the test data, and only the test data, is in the result.
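For example, here is a hedged sketch using an in-memory SQLite database seeded with test data (the reader class and table are illustrative, not from the original post):

```python
# Sketch: test a reader against an in-memory SQLite database seeded with test data.
import sqlite3


class CustomerReader:
    def __init__(self, connection):
        self.connection = connection

    def find_all(self):
        rows = self.connection.execute("SELECT id, name FROM customers ORDER BY id")
        return [{"id": r[0], "name": r[1]} for r in rows]


def test_find_all_returns_exactly_the_test_data():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                     [(1, "Ada"), (2, "Grace")])

    result = CustomerReader(conn).find_all()

    # The test data, and only the test data, comes back.
    assert result == [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
```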
The other test is an integration test: there you verify that the service layer and the entity layer work together as expected. The same goes for testing the web client: you verify that the client and the service layer work together as required.
And for integration tests, I don't see any reason not to use (already tested) service-layer methods.
Just putting this one out for debate really.
I get unit testing. It sometimes feels time-consuming, but I'm all for the benefits.
I've an application set up that contains a repository layer and a service layer, using IoC, and I've been unit testing the methods.
Now I know the benefits of isolating my methods for unit testing so there is little or no dependency on other methods.
The question I've got is this: if I only ever access my repository methods through my service-layer methods, wouldn't testing only the service layer be good enough? I'm testing against a test database.
Could it not be considered an extension of the idea that you only need to test your public methods? Maybe I'm just trying to skip some testing ;)
Yes, you should test your repository layer, although the majority of these tests fall into a different classification. I usually refer to them as integration tests to distinguish them from my unit tests, the difference being that there is an external dependency on a resource (your database) and that these tests will likely take much longer to run.
The primary reason for testing your repositories separately is that you'll be testing different things. The repository is responsible for handling translation and interaction with whatever persistence store you're using. The service layer, on the other hand, is responsible for coordinating your various repositories and other dependencies into functionality that represents business logic, which likely involves more than just a relay to a repository method and in some instances may involve multiple calls to multiple repositories.
First, to clarify service-layer testing: when testing the service layer, the repositories should be mocked so that they are isolated from what you're testing in the service layer. As you pointed out in your comment, this gives you a more granular level of testing and isolates the code under test. Your unit tests will also run much faster now because there are no database connections slowing them down.
Now, here are a few advantages of adding integration tests to your repositories...
It allows you to test out those pieces of code as you're writing them, a la TDD.
It ensures that whatever persistence language you're using (SQL, HQL, serialized objects, etc.) is formulated correctly for the operation you're attempting to perform.
If you're using an object-relational mapper, it ensures that your mappings are defined correctly.
In the future, you may find that you need to support another type of persistence. Depending on how your repository tests are structured, you may be able to reuse a large number of the tests to verify that the new database schema works correctly. For repository methods that implement database specific logic, obviously you'll have to create separate tests.
When coupled with Continuous Integration it's nice to have the repository tests separated. Integration tests, by nature take longer to run than unit tests. As such, they're usually run at less frequent intervals so that the immediate feedback available from running unit tests is not delayed.
Those are all advantages that I've seen in various projects that I've worked on. There may be more.
All that having been said, I will admit that I'm not as thorough with the repository integration tests as I am with unit tests. When it comes to testing an update on a particular object, for example, I'm usually content testing that one database column was successfully updated rather than creating a separate test for each individual column or a larger test that verifies every column in one test. For me, it depends on the complexity of the operation that the repository method is performing and whether there's any special condition that needs to be isolated.
You should test your repository layer. However, if you have integration, story or system tests that cover it, then you can make a good case for not having unit tests as well.
Unit testing is great for complex stand-alone objects, but there is no point spending a long time writing unit tests for simple methods that are covered by “higher level” tests.
Wouldn't this depend on how smart the repository access layer is? If your repository takes parameters to filter the given result set (LINQ to SQL, for example), surely this logic will need to be tested.
Unit tests: test an individual piece of logic (a method) without worrying about the dependencies of that logic. Mostly falls in the white-box category.
Integration tests: test an end-to-end flow, or more than one layer together, to ensure correctness. Mostly falls in the black-box category.
In a DAO, most of the time there is no business logic; it just forms a query for a particular database implementation. So there is no need for a unit test if we have already covered it in our integration tests. Still, we can write unit tests for a DAO if there is some logic in it.
As DAO layers are so tightly coupled to the database implementation, most of the time a JUnit test for a DAO becomes synonymous with testing the underlying database.
The query we build can only be validated by the underlying database engine.
I used to write unit tests (you could call them integration tests) for DAOs by using an actual database, or by substituting a compatible database (one that follows the same SQL standard; for example, a MySQL engine can be replaced by SQLite or an in-memory H2 database) and injecting it into the DAO, to test the DAO layer and the queries built in that layer.
I get unit testing
The next step is Test-Driven Development (TDD). It will answer your question.