It seems to me that most people write their tests against in-memory, in-process databases like SQLite when working with NHibernate. I have this up and running, but my first test (one that uses NHibernate) always takes 3-4 seconds to execute. The next test runs much faster.
I am using Fluent NHibernate to do the mapping, but I get roughly the same timings with XML mapping files. For me, the 3-4 second delay seriously disrupts my flow.
What is the recommended way of working with TDD and NHibernate?
Is it possible to mock ISession to unit test the actual queries or can this only be done with in memory databases?
I am using the Repository pattern to perform database operations, and whenever I run my tests I just run the higher-level tests that simply mock the repository (with Rhino Mocks).
I have a separate suite of tests that explicitly tests the repository layer and the NHibernate mappings. Those usually don't change as much as the business and GUI logic above them.
That way I get very fast unit tests that never hit the DB, and still a well-tested DB layer.
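For what it's worth, here's a minimal sketch of what one of those higher-level tests can look like with Rhino Mocks. The repository interface, entity, and service below are invented for the example, not taken from any real project:

    using NUnit.Framework;
    using Rhino.Mocks;

    // Hypothetical application types, standing in for your own.
    public class Customer { public int Id; public string Name; }
    public interface ICustomerRepository { Customer GetById(int id); }

    public class CustomerService
    {
        private readonly ICustomerRepository _repository;
        public CustomerService(ICustomerRepository repository) { _repository = repository; }
        public string GetDisplayName(int id) { return _repository.GetById(id).Name; }
    }

    [TestFixture]
    public class CustomerServiceTests
    {
        [Test]
        public void GetDisplayName_ReturnsNameFromRepository()
        {
            // Stub the repository so no database (and no NHibernate session) is touched.
            var repository = MockRepository.GenerateStub<ICustomerRepository>();
            repository.Stub(r => r.GetById(42)).Return(new Customer { Id = 42, Name = "Ada" });

            var service = new CustomerService(repository);

            Assert.AreEqual("Ada", service.GetDisplayName(42));
        }
    }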
Unit testing data access is not possible, but you can integration test it.
I create integration tests for my data access in a separate project from my unit tests. I only run the (slow) integration tests when I change something in the repositories, mappings, or database schema.
Because the integration tests are not mixed with the unit tests, I can still run the unit tests about 100 times a day without getting annoyed.
See http://www.autumnofagile.net and http://www.summerofnhibernate.com
Have you tried changing some of the defaults in the optional configuration properties? The slowdown is most likely related to certain optimizations NHibernate does with code generation.
http://nhibernate.info/doc/nh/en/index.html#configuration-optional
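One of those code-generation features is the reflection optimizer. As a rough sketch (worth measuring in your own project rather than taking on faith), you can switch it off before the configuration is built and see whether your first-test startup time improves:

    using NHibernate.Cfg;

    public static class SessionFactoryBuilder
    {
        public static NHibernate.ISessionFactory Build()
        {
            // Global switch; must be set before any Configuration is created.
            // Trades slightly slower property access at runtime for faster startup.
            NHibernate.Cfg.Environment.UseReflectionOptimizer = false;

            var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
            return cfg.BuildSessionFactory();
        }
    }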
It seems like an in-memory DB is going to be the fastest way to test your data layer. It also seems that once you start testing your data layer, you're moving a little beyond the realm of a unit test.
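If you do go the in-memory route, a common way to soften the startup cost is to build the session factory once per fixture (or once per test run) and only open fresh sessions per test. A rough sketch with Fluent NHibernate and SQLite; CustomerMap is a placeholder for one of your own mapping classes:

    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate;
    using NHibernate.Cfg;
    using NHibernate.Tool.hbm2ddl;
    using NUnit.Framework;

    [TestFixture]
    public class InMemoryDatabaseTests
    {
        private ISessionFactory _sessionFactory; // built once, reused by every test
        private Configuration _configuration;
        private ISession _session;

        [TestFixtureSetUp] // the expensive step happens here, not per test
        public void FixtureSetUp()
        {
            _sessionFactory = Fluently.Configure()
                .Database(SQLiteConfiguration.Standard.InMemory())
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMap>())
                .ExposeConfiguration(cfg => _configuration = cfg)
                .BuildSessionFactory();
        }

        [SetUp]
        public void SetUp()
        {
            _session = _sessionFactory.OpenSession();
            // An in-memory SQLite database lives only as long as its connection,
            // so the schema has to be exported onto this session's connection.
            new SchemaExport(_configuration)
                .Execute(false, true, false, _session.Connection, null);
        }

        [TearDown]
        public void TearDown()
        {
            _session.Dispose();
        }
    }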
Thank you for reading my question.
I was wondering how I should create unit tests for an existing database layer. As of now my project has existing unit tests, but none are written for the database layer or for any function that inserts/updates/deletes data in the database.
We are using MSTest. One approach I can think of here is:
1) We create the database on the fly, i.e. an .mdf file, keep our default values ready in it, and in our setup method (NUnit) or initialize method (MSTest) we mock the objects and dump the dummy data into the tables.
Also, we are not using any mocking framework, so I am all confused.
I need to know how we can do this from scratch. Also, are there any options available for a mocking framework?
Any pointers or samples would be highly appreciated.
Thank you again.
A C# unit test should not touch the database; you should mock the database. It should be possible to execute many thousands of unit tests on your local machine, with no external dependencies (internet, databases, other applications), within seconds, and you want to run them every time you build your code.
That kind of leaves your question unanswered: what should your database layer tests do? It depends on what kind of logic you have in that assembly! If you have "business or decision" logic, you should test that; if you have mapping logic, test that. If all your database layer does is use (whatever DB framework) to pass the load on to your database, then you might not have anything worth testing there.
If you want to test logic performed by your database (say, stored procedures), you should do that in the database project, and most likely not using MSTest.
Of course you can use MSTest to set up and tear down a database and perform tests, but those tests will not be unit tests.
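For completeness, a set-up/tear-down integration test in MSTest could look roughly like this; the repository, entity, and TestDatabaseHelper are hypothetical placeholders for your own code:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerRepositoryIntegrationTests
    {
        [TestInitialize]
        public void TestInitialize()
        {
            // Reset the test database to a known state before every test.
            TestDatabaseHelper.WipeData();          // hypothetical helper
            TestDatabaseHelper.InsertDefaultRows(); // hypothetical helper
        }

        [TestMethod]
        public void Save_PersistsCustomer()
        {
            var repository = new CustomerRepository(); // hypothetical repository
            repository.Save(new Customer { Name = "Ada" });

            Assert.IsNotNull(repository.FindByName("Ada"));
        }
    }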
Q1: When is it ideal to run unit tests? Should they be run each time before I debug the app? Should they be run before I commit changes to SVN? I think if an app only has a couple of unit tests, they should be run each time the app is about to be debugged. But let's say we have hundreds of unit tests that can take a while to complete; I'm not sure that's ideal. In that case it might be better to just run them before committing or deploying.
Q2: In my app I'm using a repository pattern with a service layer. I've done some research on how to test a service when the service is calling a repository and the repository is querying the DB. So in order for it to be a true unit test and not an integration test, I have to find a way to test without touching the database. I found people are using Moq to mock their repository. Here's where I have a problem: it seems to me that if I mock a repository, then I'm changing the behavior of how the method is supposed to work, which makes it seem like a pointless unit test. It doesn't seem like you are actually testing your code. Am I completely wrong about this? Thanks for any advice.
Let me take a shot.
A1: When you refactor existing code, you should execute the corresponding unit tests (not all of them) and see if anything is broken by your changes. For new functionality you should implement new unit tests in parallel, using TDD. You shouldn't need to execute the full suite yourself on every change; rely on continuous integration for that.
A2: I used to hold the same opinion as you. But now I am convinced that unit testing the service layer is required. Whatever can be covered by unit testing should be covered. At this point, the core of your services might be just delegation to repositories, but services evolve. They take on responsibility for parameter validation, authorization, logging, transactions, batch-support APIs, etc. Then they are not only data access but many more things. If I were in your place, I would unit test the services by mocking the repositories. Sometimes services also provide convenience methods on top of a repository.
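To make that concrete, here's a minimal Moq sketch; the order service and repository are made-up names. Notice that the test pins down the service's own behavior (the validation), not the repository's, which is what makes mocking the repository worthwhile rather than pointless:

    using System;
    using Moq;
    using NUnit.Framework;

    public class Order { public decimal Total; }
    public interface IOrderRepository { void Save(Order order); }

    public class OrderService
    {
        private readonly IOrderRepository _orders;
        public OrderService(IOrderRepository orders) { _orders = orders; }

        public void Place(Order order)
        {
            if (order.Total <= 0)
                throw new ArgumentException("Order total must be positive.");
            _orders.Save(order);
        }
    }

    [TestFixture]
    public class OrderServiceTests
    {
        [Test]
        public void Place_RejectsNonPositiveTotal_WithoutHittingRepository()
        {
            var repository = new Mock<IOrderRepository>();
            var service = new OrderService(repository.Object);

            Assert.Throws<ArgumentException>(() => service.Place(new Order { Total = 0 }));

            // The service logic is under test here, not the repository.
            repository.Verify(r => r.Save(It.IsAny<Order>()), Times.Never());
        }
    }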
Hope it might be of some help to you.
A1. When making changes to your code, the more often you run the unit tests, the faster you get feedback on whether the behavior they were written to assert has been affected, so the more often the better! Unit tests should be very fast; running several hundred should only take a couple of minutes at most. It might also be worth looking into Infinitest (if you're working with Java; I expect an alternative exists for .NET etc.). It's an Eclipse plugin that automatically runs your unit tests when Eclipse builds your project, and it's clever enough to run only the tests affected since the last run: e.g. if you update a test, or if you update some application code that is covered by some unit tests, only those specific tests will be executed.
A2. Unit tests will cover many different scenarios that call your services and DAOs many times. Using "real" services makes it difficult to guarantee the results of each call (and setting up the data for each test can be painful), and the tests can also be slow. When unit testing, it's usually better to mock these services, and to test them independently with integration tests.
I am just starting to get into unit testing and can't see an easy way to do a lot of test cases, due to the interaction with a database.
Is there a standard method/process for unit testing where database access (read and write) is required in order to assert tests?
The best I can come up with so far is to have a config file that bootstraps my app with a different DB connection, and then use the startup method to copy the live DB over to an isolated DB used only for tests.
Am I close? Or is there a better approach to this?
Your business logic shouldn't directly interact with the Database. Instead it should go through a data access layer that you can fake and mock in the context of unit testing. Look into mocking frameworks to do the mocking for you. Your tests should not depend on a database at all. Instead you should specify the data returned from your data access layer explicitly, and then ensure that your business logic behaves correctly with that information.
Testing that the program works with a DB attached is more of an integration test, and those have a lot of costs associated with them. They are slower (so it's harder to run them every time you compile), and more complicated (so they require more time and effort to maintain). If you can get simpler unit tests in place, I would recommend you do that first. Later you can add integration tests that might use the DB as well, but you'll get the most value from adding simpler unit tests first.
As far as unit-test go, I think whatever works for you in practice is the way to go. It's important that unit tests give you some value and improve the quality of your system and your ability to develop and maintain it.
I would suggest you probably don't want to be copying the live DB over to your test DB. There's no guarantee that your live database will contain data suitable for your unit tests to run consistently. The unit tests should test that your code works; they shouldn't be testing that the live database happens to contain suitable data that makes them pass, because, as it's live, your users might change its contents in ways that make your tests fail.
Your unit test code itself should probably populate your test DB with the data required to simulate the scenarios you want to write unit tests for. I messed around with some Ruby on Rails code a few years ago; its test framework would have a test class that set up the DB with some fake data, then multiple test methods in the class would run against that data, and the tear-down method would wipe the data from the database. So different test classes (sometimes called fixtures) would run against a certain data setup, which meant you could run a number of tests against the same setup instead of creating it for every test case you wanted to run. Setting up data for every test could cause your tests to run so slowly that you get bored waiting for them and stop bothering with them.
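In .NET terms, that setup/tear-down fixture pattern might look something like this (the table, columns, and connection string are invented for the example):

    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerQueryTests
    {
        // Points at a dedicated test database, never the live one.
        private const string ConnectionString =
            "server=.;database=MyAppTests;Trusted_Connection=yes";

        [SetUp]
        public void SetUp()
        {
            // Put the test database into a known state before each test,
            // instead of relying on whatever data happens to be there.
            Execute("DELETE FROM Customers");
            Execute("INSERT INTO Customers (Id, Name) VALUES (1, 'Ada'), (2, 'Grace')");
        }

        [TearDown]
        public void TearDown()
        {
            Execute("DELETE FROM Customers");
        }

        private static void Execute(string sql)
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }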
Just putting this one out for debate really.
I get unit testing. It sometimes feels time-consuming, but I'm all for the benefits.
I've an application set up that contains a repository layer and a service layer, using IoC, and I've been unit testing the methods.
Now I know the benefits of isolating my methods for unit testing so there is little or no dependency on other methods.
The question I've got is this: if I only ever access my repository methods through my service layer methods, would testing only the service layer not be good enough? I'm testing against a test database.
Could it not be considered an extension of the idea that you only need to test your public methods? Maybe I'm just trying to skip some testing ;)
Yes, you should test your repository layer. Although the majority of these tests fall into a different classification of tests. I usually refer to them as integration tests to distinguish them from my unit tests. The difference being that there is an external dependency on a resource (your database) and that these tests will likely take much longer to run.
The primary reason for testing your repositories separately is that you'll be testing different things. The repository is responsible for handling translation and interaction with whatever persistence store you're using. The service layer, on the other hand, is responsible for coordinating your various repositories and other dependencies into functionality that represents business logic, which likely involves more than just a relay to a repository method and in some instances may involve multiple calls to multiple repositories.
First, to clarify the service layer testing - when testing the service layer, the repositories should be mocked so that they are isolated from what you're testing in the service layer. As you pointed out in your comment, this gives you a more granular level of testing and isolates the code under test. Your unit tests will also run much faster now because there are no database connections slowing them down.
Now, here are a few advantages of adding integration tests to your repositories...
It allows you to test out those pieces of code as you're writing them, a la TDD.
It ensures that whatever persistence language you're using (SQL, HQL, serialized objects, etc.) is formulated correctly for the operation you're attempting to perform.
If you're using an object-relational mapper, it ensures that your mappings are defined correctly.
In the future, you may find that you need to support another type of persistence. Depending on how your repository tests are structured, you may be able to reuse a large number of the tests to verify that the new database schema works correctly. For repository methods that implement database specific logic, obviously you'll have to create separate tests.
When coupled with continuous integration, it's nice to have the repository tests separated. Integration tests, by nature, take longer to run than unit tests. As such, they're usually run at less frequent intervals so that the immediate feedback available from running unit tests is not delayed.
Those are all advantages that I've seen in various projects that I've worked on. There may be more.
All that having been said, I will admit that I'm not as thorough with the repository integration tests as I am with unit tests. When it comes to testing an update on a particular object, for example, I'm usually content testing that one database column was successfully updated, rather than creating a separate test for each individual column or a larger test that verifies every column at once. For me, it depends on the complexity of the operation that the repository method is performing and whether there's any special condition that needs to be isolated.
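As an illustration of that last point, such an update test might only pin down the column that matters. The repository, the Email property, and the surrounding fixture are invented for the sketch:

    [Test]
    public void UpdateEmail_PersistsNewEmailColumn()
    {
        // _sessionFactory and CustomerRepository are placeholders for your own setup.
        var repository = new CustomerRepository(_sessionFactory);
        var customer = repository.GetById(42);

        customer.Email = "new@example.com";
        repository.Update(customer);

        // Re-read in a fresh session so the assertion hits the database,
        // not the session-level cache.
        using (var session = _sessionFactory.OpenSession())
        {
            var reloaded = session.Get<Customer>(42);
            Assert.AreEqual("new@example.com", reloaded.Email);
        }
    }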
You should test your repository layer. However, if you have integration, story, or system tests that cover it, then you can make a good case for not having unit tests as well.
Unit testing is great for complex stand-alone objects, but there is no point spending a long time writing unit tests for simple methods that are covered by "higher level" tests.
Wouldn't this depend on how smart the repository access layer is? If your repository takes parameters to filter the given result set (LINQ to SQL, for example), surely this logic will need to be tested.
Unit tests: test an individual piece of logic (a method) without worrying about that logic's dependencies. They mostly fall into the white-box category.
Integration tests: can test an end-to-end flow, or more than one layer together, to ensure its correctness. They mostly fall into the black-box category.
In a DAO, most of the time there is no business logic; it just forms a query for a particular database implementation. So there is no need for a unit test if we have already covered it in our integration tests. Still, we can write unit tests for a DAO if there is some logic in it.
As DAO layers are so tightly coupled to the database implementation, most of the time JUnit tests for a DAO become synonymous with testing the underlying database.
The query we build can only be validated by the underlying database engine.
I used to write unit tests (you could call them integration tests) for DAOs by using the actual database, or by mocking the database with a compatible one that follows the same SQL standard (for example, the MySQL engine can be replaced by SQLite or an in-memory H2 database), and injecting this database into the DAO to test the DAO layer and the queries built in it.
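To illustrate the "some logic in the DAO" case: if the DAO builds a query from optional parameters, that building logic can be unit tested with no database at all. The query builder below is invented for the example:

    using NUnit.Framework;

    // A made-up example of query-building logic living in a DAO.
    public static class CustomerQueryBuilder
    {
        public static string Build(string name, int? minAge)
        {
            var sql = "SELECT * FROM Customers WHERE 1=1";
            if (!string.IsNullOrEmpty(name)) sql += " AND Name = @name";
            if (minAge.HasValue) sql += " AND Age >= @minAge";
            return sql;
        }
    }

    [TestFixture]
    public class CustomerQueryBuilderTests
    {
        [Test]
        public void Build_AddsFiltersOnlyForSuppliedParameters()
        {
            Assert.AreEqual(
                "SELECT * FROM Customers WHERE 1=1 AND Age >= @minAge",
                CustomerQueryBuilder.Build(null, 18));
        }
    }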
"I get unit testing."
The next step is Test-Driven Development (TDD). It will answer your question.
What is the best practice for testing an API that depends on data from the database?
What are the issues I need to watch out for in a "continuous integration" environment that runs unit tests as part of the build process? I mean, would you deploy your database as part of the build scripts (maybe run your installer), or should I go for hardcoded data [use MSTest data-driven unit tests with XML]?
I understand I can mock the data layer for the business logic layer, but what if I have issues in my SQL statements in the DAL? I do need to hit the database, right?
Well... that's a torrent of questions :)... Thoughts?
As far as possible you should mock out code to avoid hitting the database altogether, but it seems to me you're right about the need to test your SQL somewhere along the line. If you do write tests that hit the database, one key tip for avoiding headaches is to make sure that your setup gets the data into a known state, rather than relying on there already being suitable data available.
And of course, never test against your live database! But that goes without saying :)
As mentioned, use mocking to simulate DB calls in unit tests, unless you want to fiddle with your tests and data endlessly. Testing SQL statements implies more of an integration test. Run those separately from the unit tests; they are two different beasts.
It's a good idea to automatically wipe the test database and then populate it with test harness data that will be assumed to be there for all of the tests that need to connect to the database. The database needs to be reset before each test for proper isolation - a failing test that puts in bad data could cause false failures on tests that follow and it gets messy if you have to run tests in a certain order for consistent results.
You can clear and populate the database with tools (DBUnit, DBUnit.NET, others) or just make your own utility classes to do the same thing.
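A home-grown alternative to wiping and re-populating is to run each test inside a transaction and roll it back afterwards, which restores the harness data automatically. A minimal sketch using TransactionScope:

    using System.Transactions;
    using NUnit.Framework;

    [TestFixture]
    public class TransactionalDatabaseTests
    {
        private TransactionScope _scope;

        [SetUp]
        public void SetUp()
        {
            // Everything a test writes happens inside this ambient transaction...
            _scope = new TransactionScope();
        }

        [TearDown]
        public void TearDown()
        {
            // ...and is undone here: disposing without Complete() rolls back.
            _scope.Dispose();
        }
    }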
As you said, other layers should be sufficiently decoupled from the classes that actually hit the database, so the need for any kind of database being involved in testing is limited to tests covering a small subset of your codebase. Your database-accessing components can be mocked/stubbed for everything that depends on them.
One thing I did was create static methods that returned test data of a known state. I would then use a "fake" DAL to return this data as if I were actually calling the database. As for testing the SQL/stored procedures, I tested those using SQL Management Studio. YMMV!
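That approach might look something like this: a static source of test data in a known state, plus a hand-written fake implementing the DAL interface (all names invented):

    using System.Collections.Generic;

    public class Customer { public int Id; public string Name; }
    public interface ICustomerDal { IList<Customer> GetAll(); }

    // Canned data in a known state, shared by the tests that need it.
    public static class TestData
    {
        public static List<Customer> KnownCustomers()
        {
            return new List<Customer>
            {
                new Customer { Id = 1, Name = "Ada" },
                new Customer { Id = 2, Name = "Grace" },
            };
        }
    }

    // A hand-written fake standing in for the real data access layer.
    public class FakeCustomerDal : ICustomerDal
    {
        public IList<Customer> GetAll()
        {
            return TestData.KnownCustomers(); // no database involved
        }
    }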