I've got a setup with a number of layers:
Website
Application / Service
Domain (contains entities)
Persistence (contains repositories)
I'm testing the persistence layer in isolation successfully, using in-memory data created from a stub object.
Now I'm thinking about testing my Website layer. I know I should be testing it in isolation, which I'm thinking means creating a stub for the Application layer object it uses. But this stub would need its own set of in-memory data, duplicating what the stub in the persistence layer already has, and I don't want to duplicate and manage that.
So my question is: should the subject under test always work with stub objects from the layer below in order to be isolated, and do those stubs normally have their own set of data? Or is it OK for my web method under test to call a lightweight object in the Application layer, which calls the Persistence layer with stub data?
Thanks for your help. This feels like the last bit of the puzzle for me ...
Ideally, in unit testing, each subject under test is isolated from its dependencies. You do not want to think your subject under test is broken because one of its dependencies broke and caused it to fail. If you test like this, you might spend a lot of time tracking down bugs in the wrong place.
Testing how things operate together is the domain of integration testing, not unit testing.
Or is OK for my web method under test to call a lightweight object in the Application Layer which calls the Persistence layer with stub data?
If you do this, I wouldn't call the test an isolated unit test anymore - if the test fails, where is the bug? - but an integration test. Don't misinterpret me, integration testing is fine too; it just has another purpose. But if your goal is to unit test the website layer in isolation, you should mock/stub its direct dependencies.
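As a minimal sketch of that kind of isolation (all type names here are hypothetical), the website-layer class depends only on an application-layer interface, and the test hands it a hand-rolled stub with a tiny canned data set of its own; no persistence layer is involved:

```csharp
using NUnit.Framework;

// All type names here are invented, just to illustrate the shape.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The application-layer interface the website code depends on.
public interface ICustomerService
{
    Customer GetCustomer(int id);
}

// Hand-rolled stub with its own tiny canned data set; no persistence layer involved.
public class StubCustomerService : ICustomerService
{
    public Customer GetCustomer(int id)
    {
        return new Customer { Id = id, Name = "Test Customer" };
    }
}

// Website-layer class under test; it only knows about the interface.
public class CustomerPresenter
{
    private readonly ICustomerService _service;
    public CustomerPresenter(ICustomerService service) { _service = service; }
    public string GetDisplayName(int id) { return _service.GetCustomer(id).Name; }
}

[TestFixture]
public class CustomerPresenterTests
{
    [Test]
    public void GetDisplayName_ReturnsNameFromApplicationLayer()
    {
        var presenter = new CustomerPresenter(new StubCustomerService());
        Assert.AreEqual("Test Customer", presenter.GetDisplayName(42));
    }
}
```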
Setting up test data can be a tedious task. If you are using .NET, you can make use of a library called NBuilder to generate test data quickly and easily; it supports a nice fluent interface. You can read more about it here.
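For instance, assuming a simple Product class, NBuilder's fluent interface looks roughly like this:

```csharp
using FizzWare.NBuilder;

public class Product
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public class Example
{
    public static void Main()
    {
        // Ten products with auto-generated property values,
        // overriding the price on all of them.
        var products = Builder<Product>.CreateListOfSize(10)
            .All()
                .With(p => p.Price = 9.99m)
            .Build();

        // A single object works too.
        var product = Builder<Product>.CreateNew()
            .With(p => p.Title = "Test product")
            .Build();
    }
}
```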
Related
Testing the results of the endpoints is straightforward.
However, shouldn't I also have a look at the DB layer, to check whether (for example) the data I POSTed is saved correctly? I'm unclear how/if the business logic behind the REST call should be tested.
In my perfect world (someday I'd like to live there...) there are several different tests involved in fully validating a RESTful service I am providing.
There are unit tests for each of the logical layers of the application, that use dependency injection to mock the next lower layer and validate that each unit works correctly. This might be a unit that marshals/unmarshals the parameters and response, a unit that executes business logic, and a unit that manages persistence. (There could be multiple units, each with their own tests at each layer.) These tests have no dependencies outside the testing framework, with the exception of the persistence layer which has to exercise your persistence implementation.
There are also integration tests that require a running system. This is where you call the running service, and verify that you got the expected response. You may also inspect side effects of the call. On my team, we often do that by making a different service call (or calls) that relies on the result of the first call. That exercises more of the system. We find that direct inspection of the persistence layer for side effects seldom tells us much that we can't get by using a different service call.
Yes, you should test the DB layer at some point, but probably not in the same unit test as the one that tests the result of the endpoint.
At the unit test level you would like to have small/quick/independent tests, so you might end up with one test suite for the REST part and another one focusing on the DB layer.
Then you might have some more integration/end-to-end tests that check the whole system: making the request and checking that the DB is correctly updated.
My own experience, working in QA, is that tests involving the DB are done as integration/end-to-end tests from a functional perspective, by QA, using a third-party tool (we use Robot Framework for that).
One way of doing it is to use the RESTful service itself to test that the data persisted correctly.
For example, you can use a PUT message to persist an entity and then a GET to retrieve it, and make sure all the properties are equal. This is more of an integration test, covering end to end.
This is especially true if your application mostly does CRUD operations; with this approach you could avoid creating tests for each and every layer (such as repositories).
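A sketch of such a round-trip test (the endpoint URL and payload are invented for illustration):

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using NUnit.Framework;

[TestFixture]
[Category("Integration")] // needs the service running
public class WidgetRoundTripTests
{
    private static readonly HttpClient Client = new HttpClient();

    [Test]
    public void PutThenGet_ReturnsTheSameWidget()
    {
        // PUT an entity...
        var json = "{\"id\":\"42\",\"name\":\"test widget\"}";
        var put = Client.PutAsync(
            "http://localhost:8080/api/widgets/42",
            new StringContent(json, Encoding.UTF8, "application/json")).Result;
        Assert.AreEqual(HttpStatusCode.OK, put.StatusCode);

        // ...then GET it back and compare.
        var get = Client.GetAsync("http://localhost:8080/api/widgets/42").Result;
        Assert.AreEqual(HttpStatusCode.OK, get.StatusCode);

        // A stricter test would deserialize both representations
        // and compare them property by property.
        var body = get.Content.ReadAsStringAsync().Result;
        StringAssert.Contains("test widget", body);
    }
}
```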
I have a REST web-service interface that calls down to a service layer, which orchestrates the creation, deletion, etc. of various objects in an entity layer. These entity-layer objects ultimately map to database records. I have a number of unit tests (in NUnit; it's a C# application) that test this interface by sending HTTP requests.
Consider my testing of a web service request that creates some entity-layer object. I obviously want to verify that the web service considers the request to have been processed correctly, by checking the HTTP status it returns to me plus some data in the response body. I also want to independently verify that the correct database records have been created. I can think of a couple of ways to do this:
The easiest way is to use existing 'reader' classes in the entity layer to read and validate the database entries. This is easy because they incorporate the validation and consistency logic for the entities they deal with, and using them is simple. I am uneasy about this, though, because I would be using the code I'm testing as part of the test. This seems to violate some principle of separation of concerns, and also introduce the possibility of an entity-layer bug causing the object creation to fail but appear to the unit test to have succeeded.
Alternatively, the test code could go straight to the database and do the checks itself. But then I'm embedding details about object storage and consistency rules in the test - which makes the test brittle if those details change, and also effectively means re-implementing, in the unit tests, the code I've already written in the entity layer.
I wonder what people think of the trade-offs involved in these (and maybe other) options, and what (if any) is the best practice? I'm not sure there is a right or wrong answer, but I've wondered about it for a while and am interested in other opinions.
EDIT
To clarify, I have separate test suites for the service layer and the entity layer. The concerns I have expressed -- using tested code in a test -- also apply to these tests.
We see two different tests here: a test of the service methods and a test of the web client.
For testing the service methods (like a reader), you may want to create a database with predefined values (test data), call the reader, and test whether the reader's output matches the test data in the required way.
Once you've tested the service methods, you can move to the next test level and test the web client, again using the same test data, but now testing whether the data shown in the web client matches the test data in the required way. At this test level, you can "trust" the readers (because they have been tested before).
Maybe you'd feel more comfortable if you distinguished between "unit testing" and "integration testing". For unit testing, verify that a compilation unit works as required. This could be testing the reader: you populate the database with defined data, call findAll() (or something else), and assert that the test data, and only the test data, is in the result.
The other test is an integration test: there you verify that the service layer and the entity layer work together as expected. The same goes for testing the web client: you verify that the client and the service layer work together as required.
And for integration tests I don't see any reason to not use (tested) service layer methods.
Just putting this one out for debate really.
I get unit testing. Sometimes feels time consuming but I'm all for the benefits.
I've an application set up that contains a repository layer and a service layer, using IoC, and I've been unit testing the methods.
Now I know the benefits of isolating my methods for unit testing so there is little or no dependency on other methods.
The question I've got is this: if I only ever access my repository methods through my service layer methods, would testing only the service layer not be good enough? I'm testing against a test database.
Could it not be considered an extension of the idea that you only need to test your public methods? Maybe I'm just trying to skip some testing ;)
Yes, you should test your repository layer, although the majority of those tests fall into a different classification of tests; I usually refer to them as integration tests to distinguish them from my unit tests. The difference is that there is an external dependency on a resource (your database) and that these tests will likely take much longer to run.
The primary reason for testing your repositories separately is that you'll be testing different things. The repository is responsible for handling translation and interaction with whatever persistence store you're using. The service layer, on the other hand, is responsible for coordinating your various repositories and other dependencies into functionality that represents business logic, which likely involves more than just a relay to a repository method and in some instances may involve multiple calls to multiple repositories.
First, to clarify the service-layer testing: when testing the service layer, the repositories should be mocked so that they are isolated from what you're testing in the service layer. As you pointed out in your comment, this gives you a more granular level of testing and isolates the code under test. Your unit tests will also run much faster now because there are no database connections slowing them down.
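As a rough sketch of what that looks like (Moq is used here purely as an example mocking framework, and all the type names are invented):

```csharp
using Moq;
using NUnit.Framework;

public class Order
{
    public int Id { get; set; }
    public bool Shipped { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

public class OrderService
{
    private readonly IOrderRepository _orders;
    public OrderService(IOrderRepository orders) { _orders = orders; }

    // Business logic under test: more than a straight relay to the repository.
    public void Ship(int id)
    {
        var order = _orders.GetById(id);
        order.Shipped = true;
        _orders.Save(order);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Ship_MarksTheOrderShipped_AndSavesIt()
    {
        var repo = new Mock<IOrderRepository>();
        repo.Setup(r => r.GetById(42)).Returns(new Order { Id = 42 });

        new OrderService(repo.Object).Ship(42);

        // No database involved; we only verify the interaction.
        repo.Verify(r => r.Save(It.Is<Order>(o => o.Shipped)), Times.Once());
    }
}
```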
Now, here are a few advantages of adding integration tests to your repositories...
It allows you to test out those pieces of code as you're writing them, a la TDD.
It ensures that whatever persistence language you're using (SQL, HQL, serialized objects, etc.) is formulated correctly for the operation you're attempting to perform.
If you're using an object-relational mapper, it ensures that your mappings are defined correctly.
In the future, you may find that you need to support another type of persistence. Depending on how your repository tests are structured, you may be able to reuse a large number of the tests to verify that the new database schema works correctly. For repository methods that implement database specific logic, obviously you'll have to create separate tests.
When coupled with Continuous Integration, it's nice to have the repository tests separated. Integration tests, by nature, take longer to run than unit tests. As such, they're usually run at less frequent intervals so that the immediate feedback available from running unit tests is not delayed.
Those are all advantages that I've seen in various projects that I've worked on. There may be more.
All that having been said, I will admit that I'm not as thorough with the repository integration tests as I am with unit tests. When it comes to testing an update on a particular object, for example, I'm usually content testing that one database column was successfully updated, rather than creating a separate test for each individual column or one larger test that verifies every column. For me, it depends on the complexity of the operation the repository method is performing and whether there's any special condition that needs to be isolated.
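Regarding the Continuous Integration point above: one common way to keep the slower repository tests separate is NUnit's Category attribute; most runners can then include or exclude a category (for example, the NUnit console's /include and /exclude options). The names below are illustrative:

```csharp
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryIntegrationTests
{
    // Tagging slow, database-dependent tests lets the CI server
    // run them on a separate, less frequent schedule.
    [Test]
    [Category("Integration")]
    public void Save_PersistsCustomerToDatabase()
    {
        // ...exercise the real repository against a test database...
    }
}
```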
You should test your repository layer. However, if you have integration, story, or system tests that cover it, then you can make a good case for not having unit tests as well.
Unit testing is great for complex stand-alone objects, but there is no point spending a long time writing unit tests for simple methods that are covered by “higher level” tests.
Wouldn't this depend on how smart the repository access layer is? If your repository takes parameters to filter the result set (LINQ to SQL, for example), surely this logic will need to be tested.
Unit test: tests an individual piece of logic (a method) without worrying about that logic's dependencies. Mostly falls into the white-box category.
Integration test: can test an end-to-end flow, or more than one layer together, to ensure correctness. Mostly falls into the black-box category.
In a DAO, most of the time there is no business logic; it just forms a query for a particular database implementation. So there is no need for a unit test if it is already covered by our integration tests. Still, we can write unit tests for a DAO if there is some logic in it.
As DAO layers are so tightly coupled to the database implementation, most of the time a JUnit test for a DAO becomes synonymous with testing the underlying database.
The query we build can only be validated by the underlying database engine.
I used to write unit tests (you could call them integration tests) for DAOs using the actual database, or by mocking the database with a compatible one that follows the same SQL standard (for example, a MySQL engine can be replaced by SQLite or an in-memory H2 database), injecting this database into the DAO to test the DAO layer and the queries built in it.
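As a sketch of that approach in .NET terms (using the Microsoft.Data.Sqlite package and invented type names; the same idea applies to H2 in Java), the test seeds an in-memory SQLite database and runs the DAO's query against it:

```csharp
using System.Collections.Generic;
using Microsoft.Data.Sqlite;
using NUnit.Framework;

public class Product
{
    public long Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical DAO that accepts an injected connection.
public class ProductDao
{
    private readonly SqliteConnection _connection;
    public ProductDao(SqliteConnection connection) { _connection = connection; }

    public List<Product> FindAll()
    {
        var result = new List<Product>();
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = "SELECT id, name FROM product";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    result.Add(new Product { Id = reader.GetInt64(0), Name = reader.GetString(1) });
                }
            }
        }
        return result;
    }
}

[TestFixture]
public class ProductDaoTests
{
    [Test]
    public void FindAll_ReturnsOnlyTheSeededRows()
    {
        // In-memory SQLite stands in for the real engine; this is only
        // valid as long as the queries stick to SQL both engines share.
        using (var connection = new SqliteConnection("Data Source=:memory:"))
        {
            connection.Open();
            using (var setup = connection.CreateCommand())
            {
                setup.CommandText =
                    "CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT); " +
                    "INSERT INTO product (id, name) VALUES (1, 'widget');";
                setup.ExecuteNonQuery();
            }

            var all = new ProductDao(connection).FindAll();

            Assert.AreEqual(1, all.Count);
            Assert.AreEqual("widget", all[0].Name);
        }
    }
}
```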
I get unit testing
The next step is Test-Driven Development (TDD). It will answer your question.
I must be doing something fundamentally wrong.
I am implementing my repositories and then testing them with mocked data. All is well.
Now I want to test my domain objects so I point them at mock repositories.
But I'm finding that I have to re-implement logic from the 'real' repositories in the mocks, or create 'helper classes' that encapsulate the logic and interact with the repositories (real or mock), and then I have to test those too.
So what am I missing - why implement and test mock repositories when I could use the real ones with mocked data?
EDIT: To clarify, by 'mocked data' I mean I do not hit the actual database. I have a 'DB mock layer' I can insert under the real repositories that returns known data.
Maybe you're mixing your abstractions. Maybe some of the helpers and logic that you're talking about should be in your domain objects. Your repositories should implement a CRUD interface, so your domain logic is using eight actions from the repository: for each of the four operations, a good call and a bad call. A good retrieve and a bad retrieve (exception thrown) are only two of the eight. One test is how the bad retrieve is handled in the domain object; the rest of the tests should exercise a good retrieve.
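A rough sketch of that idea (all names invented): a CRUD repository interface, a domain object that has to cope with a bad retrieve, and a test for that one case out of the eight:

```csharp
using System;
using NUnit.Framework;

// Hypothetical CRUD interface; each of the four operations can
// succeed or throw, giving the eight cases mentioned above.
public interface IRepository<T>
{
    void Create(T item);
    T Retrieve(int id);
    void Update(T item);
    void Delete(int id);
}

public class AccountNotFoundException : Exception { }

public class Account
{
    public int Id { get; set; }
    public string Owner { get; set; }
}

// Domain object that has to cope with a failed retrieve.
public class AccountLookup
{
    private readonly IRepository<Account> _accounts;
    public AccountLookup(IRepository<Account> accounts) { _accounts = accounts; }

    public string OwnerNameOrDefault(int id)
    {
        try { return _accounts.Retrieve(id).Owner; }
        catch (AccountNotFoundException) { return "(unknown)"; }
    }
}

// Mock repository for the "bad retrieve" case only.
public class ThrowingAccountRepository : IRepository<Account>
{
    public void Create(Account item) { }
    public Account Retrieve(int id) { throw new AccountNotFoundException(); }
    public void Update(Account item) { }
    public void Delete(int id) { }
}

[TestFixture]
public class AccountLookupTests
{
    [Test]
    public void BadRetrieve_FallsBackToDefaultName()
    {
        var lookup = new AccountLookup(new ThrowingAccountRepository());
        Assert.AreEqual("(unknown)", lookup.OwnerNameOrDefault(7));
    }
}
```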
Unit testing is meant to minimize the route from the tested element to the test data, in order to reduce search time when a bug occurs: the bug is either in the routine you test or in the test unit, and nothing else (well, maybe the OS or the compiler... but those cases are very rare).
Now imagine you made a database accessor module. The module seems to work OK. Then you make a config reader that reads from the database. Instead of unit testing it against a mock database accessor module, you use your real database module on top of mock config data in the database. It still seems to work OK. Now you add a GUI layer that uses the config object returned by the config reader. Instead of mocking the prepared config data, you use the real modules, which apparently work.
And now your GUI stops working for a certain value. Where is the error? In the GUI, in the config reader, in the database accessor or in the values in the mock database?
In other words, you're not just testing the code you're writing, you're testing the real repository system as well, and if a bug is found you have an additional layer to analyze.
why implement and test mock repositories when I could use the real ones with mocked data?
I guess it really comes down to how much code coverage you think you need to have. If you feel the repository is sound and you can use it for testing, go for it. Perhaps the repository unit tests can be run prior to the mock database unit tests, so that you can use the real repositories for testing and feel confident that the testing is still valid.
I'm fairly new to unit testing. I have a site built in the 3-tier architecture, UI -> BLL -> DAL, and I used the ASP.NET Provider Model, so by making changes to web.config I can switch the DAL dll to target different data sources. To be complete, the DAL is written with Entity Framework.
Now, my question is how do I unit test my BLL? I use NUnit.
If I run/debug my site, ASP.NET/IIS loads everything and gets the correct configuration from web.config, so things work; that is because the entry point is IIS. Now if I use the NUnit GUI to test, and say I have my test project "MySite.Test.dll" which has test cases for my BLL, how does the testing framework get the correct configuration to successfully run all the tests? It needs the info in web.config to load the correct provider!
Now, in my DAL there is an App.config created by Entity Framework, and in it there is just the connection string. Should I put all the provider-related configuration in this App.config? Or am I missing some big picture on how to do this correctly?
This should be a common thing I imagine people need to do constantly. Could someone give some detail on how to unit test my library?
Thank you,
Ray.
Edit: After reading the first two answers, I think I should correct my description: this is integration testing. Basically, instead of IIS as the entry point, I use a GUI tool like NUnit to run and test my code, so NUnit -> BLL -> DAL. How do people actually set this up?
Thanks,
Ray.
Looks like what you are trying to do is integration testing...
Unit testing, by definition, tests Plain Old .NET Classes in isolation. No database, no configuration... So, as I see it, to do proper unit testing you need to refactor your BLL into a service layer and domain logic classes, which you then test separately. That is, the service layer uses the domain logic classes, and your unit tests use them directly. The domain classes never go to the database, so you do not need connection strings or any of that.
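For example, a domain-logic class pulled out of the BLL like this (a made-up example) can be unit tested with no database and no configuration at all:

```csharp
using NUnit.Framework;

// Hypothetical domain-logic class extracted from the BLL:
// pure computation, so the test needs no connection string.
public class DiscountCalculator
{
    public decimal Apply(decimal price, int quantity)
    {
        return quantity >= 10 ? price * 0.9m : price;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void TenOrMoreItems_GetTenPercentOff()
    {
        Assert.AreEqual(9.0m, new DiscountCalculator().Apply(10m, 10));
    }
}
```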
However, if you want to do proper integration testing with the database, you might want to do that too. If this is what you need, google it; it's not difficult to get configuration strings into an nunit.config or something similar... I don't know the details.
However, I feel what you want to do is unit testing, not integration testing.
Ask yourself, WHAT EXACTLY DO I WANT TO TEST?
Unit testing does not test "everything". Refactor, invert dependencies, and try to test your business logic in isolation.
Instead of a "web.config" file in your unit test project, you'll need a "MySite.Test.dll.config" file instead where you can enter the correct configuration for the testing. Note, using this method you could use a different provider to connect to an in-memory database instead if you wanted.
You have a couple of different options depending on how isolated you want to be from the DAL. If you want to involve the DAL in your tests, then you can copy the connection string section of the app.config file to an app.config file in your unit test project. As @badbadboy says, though, this really is integration testing, not unit testing.
If you want to do proper unit testing you probably want to use dependency injection and interfaces to allow you to mock out the DAL from your BLL. I use LINQ-to-SQL so what I do is create an interface and wrapper class around the DataContext. This allows me to create a mock database to use for unit testing to test my entity classes in isolation.
Testing your entity classes in isolation will make your tests less brittle and allow you to set up the data for each test independently. This makes it much easier to maintain.
You also might want to look into a mocking framework that will make it almost trivial to generate the mock objects. I've had pretty good success with Rhino Mocks, but there are others.
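To make the wrapper idea concrete, here is a rough sketch with invented names; the production implementation of the interface would delegate to the LINQ-to-SQL DataContext's GetTable&lt;T&gt;(), while the test substitutes an in-memory fake:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical wrapper interface around the LINQ-to-SQL DataContext,
// so the BLL never touches the real database type directly.
// The production implementation would return _dataContext.GetTable<T>().
public interface IDataContext
{
    IQueryable<T> Repository<T>() where T : class;
}

public class Customer
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

// BLL class that depends only on the abstraction.
public class CustomerQueries
{
    private readonly IDataContext _context;
    public CustomerQueries(IDataContext context) { _context = context; }

    public int CountActive()
    {
        return _context.Repository<Customer>().Count(c => c.IsActive);
    }
}

// Fake backed by in-memory lists; no connection string required.
public class FakeDataContext : IDataContext
{
    private readonly Dictionary<Type, object> _tables = new Dictionary<Type, object>();

    public void Add<T>(params T[] items) where T : class
    {
        _tables[typeof(T)] = items.ToList();
    }

    public IQueryable<T> Repository<T>() where T : class
    {
        return ((List<T>)_tables[typeof(T)]).AsQueryable();
    }
}

[TestFixture]
public class CustomerQueriesTests
{
    [Test]
    public void CountActive_CountsOnlyActiveCustomers()
    {
        var fake = new FakeDataContext();
        fake.Add(new Customer { Id = 1, IsActive = true },
                 new Customer { Id = 2, IsActive = false });

        Assert.AreEqual(1, new CustomerQueries(fake).CountActive());
    }
}
```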