Unit Testing - database and fixtures

I am just starting to get into unit testing and can't see an easy way to do a lot of test cases due to the interaction with a database.
Is there a standard method/process for unit testing where database access (read and write) is required in order to assert tests?
The best I can come up with so far is to have a config file that bootstraps my app with a different db connection, and then use the startup method to copy the live db over to a separate db used in isolation for tests?
Am I close? Or is there a better approach to this?

Your business logic shouldn't directly interact with the Database. Instead it should go through a data access layer that you can fake and mock in the context of unit testing. Look into mocking frameworks to do the mocking for you. Your tests should not depend on a database at all. Instead you should specify the data returned from your data access layer explicitly, and then ensure that your business logic behaves correctly with that information.
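To make that concrete, here is a minimal sketch using NUnit and Moq (one popular .NET mocking framework). The repository interface, Customer and InvoiceService types are invented for illustration, not taken from the question:

```csharp
// A rough sketch (NUnit + Moq); ICustomerRepository, Customer and
// InvoiceService are placeholder types standing in for your own code.
using Moq;
using NUnit.Framework;

public class Customer { public int Id; public decimal CreditLimit; }

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class InvoiceService
{
    private readonly ICustomerRepository _customers;
    public InvoiceService(ICustomerRepository customers) { _customers = customers; }

    // Business rule under test: an amount is approved only if it fits the credit limit.
    public bool CanInvoice(int customerId, decimal amount)
    {
        return amount <= _customers.GetById(customerId).CreditLimit;
    }
}

[TestFixture]
public class InvoiceServiceTests
{
    [Test]
    public void Rejects_amounts_over_the_credit_limit()
    {
        // Specify the data "returned from the database" explicitly -- no DB involved.
        var repo = new Mock<ICustomerRepository>();
        repo.Setup(r => r.GetById(42)).Returns(new Customer { Id = 42, CreditLimit = 100m });

        var service = new InvoiceService(repo.Object);

        Assert.IsFalse(service.CanInvoice(42, 150m));
        Assert.IsTrue(service.CanInvoice(42, 50m));
    }
}
```

The point is that the test pins down the behaviour of the business logic for a known set of data, without caring how that data would normally be loaded.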
Testing that the program works with a DB attached is more of an integration test, and those have a lot of costs associated with them. They are slower (so it's harder to run them every time you compile), and more complicated (so they require more time and effort to maintain). If you can get simpler unit tests in place, I would recommend you do that first. Later you can add integration tests that might use the DB as well, but you'll get the most value from adding simpler unit tests first.

As far as unit-test go, I think whatever works for you in practice is the way to go. It's important that unit tests give you some value and improve the quality of your system and your ability to develop and maintain it.
I would suggest you probably don't want to be copying the live db over to your test db. There's probably no guarantee that your live database will contain suitable data that makes your unit tests pass consistently. The unit tests should test that your code works; they shouldn't be testing that the live database happens to contain suitable data that causes them to pass, because as it's live, your users might change its contents in ways that make your tests fail.
Your unit test code itself should probably populate your test db with the data required to simulate the scenarios you want to write unit tests for. I messed around with some Ruby on Rails code a few years ago; the test framework for that would have a test class which set up the db with some fake data, then multiple test methods from the class would run against that data, and the tear-down method would wipe the data from the database. So different test classes (sometimes people call them fixtures) would run against a certain data setup, which meant you could run a number of tests against the same data setup instead of creating it for every test case you wanted to run. Setting up data for every test could end up causing your tests to run slowly, to the point where you get bored of waiting for them and stop bothering with them.
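As a rough sketch of that fixture pattern, here is an NUnit class that seeds data once, runs several tests against it, and wipes it in teardown. It uses an in-memory SQLite database (Microsoft.Data.Sqlite) so it stays self-contained; the table and rows are invented, and older NUnit versions call the attributes TestFixtureSetUp/TestFixtureTearDown:

```csharp
// A sketch of the fixture idea: seed once, run several tests against the same
// data, wipe afterwards. The schema and data are invented for illustration.
using Microsoft.Data.Sqlite;
using NUnit.Framework;

[TestFixture]
public class CustomerQueryFixture
{
    private SqliteConnection _db;

    [OneTimeSetUp]   // older NUnit versions call this TestFixtureSetUp
    public void SeedTestData()
    {
        _db = new SqliteConnection("Data Source=:memory:");
        _db.Open();

        var cmd = _db.CreateCommand();
        cmd.CommandText =
            "CREATE TABLE Customers (Id INTEGER PRIMARY KEY, Name TEXT, Active INTEGER);" +
            "INSERT INTO Customers VALUES (1, 'Alice', 1), (2, 'Bob', 0);";
        cmd.ExecuteNonQuery();
    }

    [Test]
    public void Finds_only_active_customers()
    {
        var cmd = _db.CreateCommand();
        cmd.CommandText = "SELECT COUNT(*) FROM Customers WHERE Active = 1";
        Assert.AreEqual(1, (long)cmd.ExecuteScalar());
    }

    [Test]
    public void Knows_the_total_number_of_customers()
    {
        var cmd = _db.CreateCommand();
        cmd.CommandText = "SELECT COUNT(*) FROM Customers";
        Assert.AreEqual(2, (long)cmd.ExecuteScalar());
    }

    [OneTimeTearDown]  // the "wipe" step: closing the connection discards the in-memory db
    public void DropTestData()
    {
        _db.Dispose();
    }
}
```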

Related

How is unit testing testing anything?

I don't understand how I'm testing anything with unit testing.
Suppose I am testing that my repository class can retrieve values from the database correctly. The proper way to do this would be to actually call the real database and retrieve and check those values.
But the idea behind unit testing is that it should be done in isolation, and connecting to a running database is not isolation. So what is usually done is to mock or stub the database.
But why would testing on a fake database with hardcoded data and hardcoded return values even test anything? It seems tautological and a waste of time.
Or am I not understanding how to unit test properly?
Does one even unit test database calls?
I don't understand how I'm testing anything with unit testing.
Short answer: you are testing the logic, and leaving out the side effects.
You aren't testing everything; but you are testing something.
Furthermore, if you keep in mind that you aren't really testing the code with side effects, then you are motivated to arrange your code so that the pieces that actually depend on the side effect are small. The big pieces don't actually care where the data comes from, so those are easy to test.
So "something" can be "most things".
There is an impedance problem -- if your test doubles impersonate the production originals inadequately, then some of your test results will be inaccurate.
my philosophy is to test as little as possible to reach a given level of confidence
Kent Beck, 2008
One way of imagining "as little as possible" is to think in terms of cost -- we're aiming for a given confidence level, so we want to achieve as much of that confidence as we can using cheap unit tests, and then make up the difference with more expensive techniques.
Cory Benfield's talk Building Protocol Libraries the Right Way describes an example of the kind of separation we're talking about here. The logic of how to parse an HTTP message is separable from the problem of reading the bytes. If you make the complicated part easy to test, and the hard to test part too simple to fail, your chances of succeeding are quite good.
I think your concern is valid. For me, TDD is more of an evolutionary design practice than unit testing practice, but I'll save that for another discussion.
In your example, what we are really testing is that the logic contained within your individual classes is sound. By stubbing the data coming from the database, you have a controlled scenario, so you can ensure your code works for that particular case. This makes it much easier to ensure full test coverage for all data scenarios. You're correct that this really doesn't test the whole system end to end, but the point is to reduce the overall test maintenance costs and enable faster feedback.
My approach is to mock most collaborators at the unit test level, then write acceptance tests at the integration test level, which validates your system using real data. Because the unit tests with their mocked data allows you to test various data scenarios out, you only need to test a few of those scenarios using integration tests to feel confident that your code will perform as you expect.
You can test your code against an actual database in isolation. Just create a new database instance for every test, or execute the tests synchronously one after another and clean the database before each test.
But using an actual database will make your tests slow, which will slow down your work, because you want quick feedback on what you are doing.
Do not test every class - test the main feature logic, which can use many different classes, and mock/stub only the dependencies that make the tests slow.
Find your application boundaries and test the logic between them without mocking.
For example, in a trivial web API application the boundaries can be:
- controller action -> request (input)
- controller action -> response (output)
- database -> side effect of the received request.
Assume we live in a perfect world where setting up a new database and web server takes milliseconds. Then you would test the whole pipeline of your application:
1. Configure the database for the test
2. Send a request to the web API server
3. Assert that the response contains the expected data
4. Assert that the database state changed as expected
But in today's world your boundaries will be the controller action and an abstracted database access point, which makes your test look like this (see the sketch below):
1. Configure a mocked database access point (repository)
2. Call the controller action with the given parameters
3. Assert that the action returns the expected result
4. Possibly assert that the mocked repository received the expected update arguments.
If your application has no logic and just reads/updates data in the database, test it against an actual database or, if your database framework allows it, an in-memory database.
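Here is a rough sketch of that boundary-style test with a hand-rolled fake standing in for the database access point; all the type names are placeholders.

```csharp
// Call the "controller action" with input, assert the output, and assert the
// side effect recorded by the fake repository.
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IUserRepository
{
    void Add(string name);
    IReadOnlyList<string> All();
}

// Hand-rolled fake standing in for the database access point.
public class FakeUserRepository : IUserRepository
{
    private readonly List<string> _users = new List<string>();
    public void Add(string name) => _users.Add(name);
    public IReadOnlyList<string> All() => _users;
}

public class UsersController
{
    private readonly IUserRepository _repo;
    public UsersController(IUserRepository repo) { _repo = repo; }

    public string Register(string name)
    {
        if (string.IsNullOrWhiteSpace(name)) return "error: name required";
        _repo.Add(name);
        return "ok";
    }
}

[TestFixture]
public class UsersControllerTests
{
    [Test]
    public void Registering_a_user_returns_ok_and_stores_the_user()
    {
        var repo = new FakeUserRepository();
        var controller = new UsersController(repo);

        var response = controller.Register("alice");          // 2. call the action
        Assert.AreEqual("ok", response);                       // 3. assert the response
        Assert.IsTrue(repo.All().Contains("alice"));           // 4. assert the side effect
    }
}
```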

Testing Real Repositories

I've set up unit tests for a fake repository, and tests for code that makes use of that fake repository.
But what about testing the real repository that hits the database? If this is left to integration tests then it would seem that it isn't tested directly and problems could be missed.
Am I missing something here?
Well, the integration tests would only test the literal persistence or retrieval of data to and from the layer of persistence. If your repository is doing any kind of logic concerning that data (validation, throwing exceptions if an object isn't found, etc.), that can be unit tested by faking what the persistence layer returns (whether it returns the queried object, a return code, or something else). Your integration test will assure you that the code can physically persist/retrieve data from persistence, and that's it. Any sort of logic to test ought to belong in a unit test.
Sometimes, however, logic could exist in the persistence layer itself (e.g. stored procedures). This could be for the sake of efficiency, or it could merely be legacy code. This is harder to properly unit test, as you can only access the logic by getting to the database. In this scenario, it'd probably be best to try and move the logic to your code base as much as possible, so that it can be tested more easily. There probably exist unit testing frameworks for scenarios such as these, but I'm not aware of them (merely out of inexperience).
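Going back to the first point, repository logic like "throw if the object isn't found" can be unit tested by faking what the persistence layer returns. A sketch with invented names:

```csharp
// Unit testing the repository's own logic by stubbing the raw persistence
// layer. IProductGateway, Product and ProductRepository are placeholders.
using System;
using NUnit.Framework;

public class Product { public int Id; public string Name; }

// The raw persistence layer: returns null when nothing matches.
public interface IProductGateway
{
    Product Find(int id);
}

public class ProductNotFoundException : Exception { }

public class ProductRepository
{
    private readonly IProductGateway _gateway;
    public ProductRepository(IProductGateway gateway) { _gateway = gateway; }

    public Product GetById(int id)
    {
        // The logic worth unit testing lives here, not in the SQL.
        return _gateway.Find(id) ?? throw new ProductNotFoundException();
    }
}

public class NullGateway : IProductGateway
{
    public Product Find(int id) => null;   // fake an empty database
}

[TestFixture]
public class ProductRepositoryTests
{
    [Test]
    public void Throws_when_the_product_does_not_exist()
    {
        var repository = new ProductRepository(new NullGateway());
        Assert.Throws<ProductNotFoundException>(() => repository.GetById(99));
    }
}
```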
Can you set up a real repository that tests against a fake database?
Regardless of what you do, this is integration testing, not unit testing.
I'd definitely suggest integration tests against the DAL, within reason.
We don't use the Repository pattern per se (to our chagrin), but our policy for similar classes (Searchers) is as follows:
If the method does a simple retrieve from the database using an O/RM call, don't test it.
If the method uses query-building features of the O/RM, test it.
If the method contains a string (such as a column name), test it.
If the method calls a stored procedure, test it.
If the method contains logic, test it. But try to avoid logic.
If the method bypasses the O/RM and uses raw SQL, test it. But really try to avoid this.
The gist is you should know your O/RM works (and hopefully has tests), so there's no reason to test basic CRUD behavior.
You'll definitely want a "test deck" - an in-memory database, a local file-backed database that can be checked into source control, or (if you have to) a shared database. Some testing frameworks offer rollback facilities to restore the database state; just be careful if you're hitting multiple databases in the same test or (in some cases) if you have embedded transactions.
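One common rollback arrangement (a sketch, assuming the code under test uses plain ADO.NET/SqlClient and therefore enlists in the ambient transaction) is to wrap each test in a TransactionScope and never complete it:

```csharp
// Wrap each test in a TransactionScope and never call Complete(), so whatever
// the test wrote is rolled back when the scope is disposed. Distributed or
// nested transactions can complicate this.
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class RepositoryIntegrationTests
{
    private TransactionScope _scope;

    [SetUp]
    public void OpenTransaction() => _scope = new TransactionScope();

    [TearDown]
    public void RollBack() => _scope.Dispose();   // no Complete() => rollback

    [Test]
    public void Insert_then_read_back()
    {
        // ... exercise the real repository against the shared test database;
        // any rows written here disappear when the scope is disposed.
    }
}
```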
EDIT: Note that these integration tests will still test your repository in "isolation" (save for the database). All your other unit tests will use a fake repository.
I recently covered a very similar question over here.
In summary: test your concrete Repository implementations if there's value in doing so. If you are doing something complex in your implementation, it is probably a good idea to test it. If you are using an ORM with no custom logic, there may not be much value in writing tests at that level.

How do I do TDD efficiently with NHibernate?

It seems to me that most people write their tests against in-memory, in-process databases like SQLite when working with NHibernate. I have this up and running, but my first test (that uses NHibernate) always takes 3-4 seconds to execute. The next test runs much faster.
I am using FluentNHibernate to do the mapping but get roughly the same timings with XML mapping files. For me the 3-4 second delay seriously disrupts my flow.
What is the recommended way of working with TDD and NHibernate?
Is it possible to mock ISession to unit test the actual queries or can this only be done with in memory databases?
I am using the Repository pattern to perform database operations, and whenever I run my tests I just run the higher-level tests that simply mock the repository (with RhinoMocks).
I have a separate suite of tests that explicitly tests the repository layer and the NHibernate mappings. And those usually don't change as much as the business and GUI logic above them.
That way I get very fast unit tests that never hit the DB, and still a well-tested DB layer.
Unit testing data access is not possible, but you can integration test it.
I create integration tests for my data access in a separate project from my unit tests. I only run the (slow) integration tests when I change something in the repositories, mappings or database schema.
Because the integration tests are not mixed with the unit tests, I can still run the unit tests about 100 times a day without getting annoyed.
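One hedged sketch of keeping the two suites apart with NUnit is to tag the database-connected fixtures with a category and have the runner or CI build filter on it (the exact filtering syntax depends on your runner):

```csharp
// Tag the slow, database-connected fixtures so the everyday run can skip them.
using NUnit.Framework;

[TestFixture]
[Category("Integration")]          // excluded from the fast, every-compile run
public class CustomerRepositoryIntegrationTests
{
    [Test]
    public void Saves_and_reloads_a_customer()
    {
        // ... hits the real database and the real mappings
    }
}

[TestFixture]                       // plain unit tests: run these all day long
public class CustomerServiceTests
{
    [Test]
    public void Applies_the_discount_rules()
    {
        // ... no database involved
    }
}
```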
See http://www.autumnofagile.net and http://www.summerofnhibernate.com
Have you tried changing some of the defaults in the optional configuration properties? The slowdown is most likely related to certain optimizations NHibernate does with code generation.
http://nhibernate.info/doc/nh/en/index.html#configuration-optional
It seems like an in-memory db is going to be the fastest way to test your data layer. It also seems that once you start testing your data layer you're moving a little beyond the realm of a unit test.
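For what it's worth, here is a hedged sketch of the usual in-memory SQLite arrangement with FluentNHibernate: build the expensive ISessionFactory once per run (that is where the 3-4 seconds go) and export the schema onto each new session's own connection. The Customer entity and CustomerMap mapping are placeholders, and you should check the SchemaExport.Execute overloads available in your NHibernate version.

```csharp
// Hedged sketch: build the ISessionFactory once, then give every session a
// fresh in-memory SQLite database by exporting the schema on its connection.
using System;
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using FluentNHibernate.Mapping;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap() { Id(x => x.Id); Map(x => x.Name); }
}

public static class InMemoryDatabase
{
    private static Configuration _configuration;

    // Lazy, so only the first test of the run pays the start-up cost.
    private static readonly Lazy<ISessionFactory> Factory = new Lazy<ISessionFactory>(() =>
        Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.InMemory())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMap>())
            .ExposeConfiguration(cfg => _configuration = cfg)
            .BuildSessionFactory());

    public static ISession OpenSession()
    {
        var session = Factory.Value.OpenSession();
        // An in-memory SQLite database lives only as long as its connection,
        // so the schema has to be created on this particular connection.
        // Arguments: script, export, justDrop, connection, output writer.
        new SchemaExport(_configuration).Execute(false, true, false, session.Connection, null);
        return session;
    }
}
```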

Data Driven Unit Tests

What is the best practice for testing an API that depends on data from the database?
What are the issues I need to watch out for in a "Continuous Integration" environment that runs unit tests as part of the build process? I mean, would you deploy your database as part of the build scripts (maybe run your installer), or should I go for hardcoded data [use MSTest data-driven unit tests with XML]?
I understand I can mock the data layer for the business logic layer, but what if I had issues in my SQL statements in the DAL? I do need to hit the database, right?
Well... that's a torrent of questions :)... Thoughts?
As far as possible you should mock out code to avoid hitting the database altogether, but it seems to me you're right about the need to test your SQL somewhere along the line. If you do write tests that hit the database, one key tip for avoiding headaches is to make sure that your setup gets the data into a known state, rather than relying on there already being suitable data available.
And of course, never test against your live database! But that goes without saying :)
As mentioned, use mocking to simulate DB calls in unit tests unless you want to fiddle with your tests and data endlessly. Testing SQL statements is more of an integration test. Run those separately from the unit tests; they are two different beasts.
It's a good idea to automatically wipe the test database and then populate it with test harness data that will be assumed to be there for all of the tests that need to connect to the database. The database needs to be reset before each test for proper isolation - a failing test that puts in bad data could cause false failures on tests that follow and it gets messy if you have to run tests in a certain order for consistent results.
You can clear and populate the database with tools (DBUnit, DBUnit.NET, others) or just make your own utility classes to do the same thing.
As you said, other layers should be sufficiently decoupled from the classes that actually hit the database, so the need for any kind of database being involved in testing is limited to tests covering a small subset of your codebase. Your database-accessing components can be mocked/stubbed for everything that depends on them.
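Going back to the "make your own utility classes" option above, here is a sketch of such a helper that wipes the relevant tables and repopulates the known baseline before every test. The connection string, table names and data are placeholders for whatever your harness needs.

```csharp
// A roll-your-own reset utility: put the test database into a known state
// before each test so tests stay isolated regardless of run order.
using System.Data.SqlClient;
using NUnit.Framework;

public static class TestDatabase
{
    public const string ConnectionString =
        "Server=localhost;Database=MyApp_Test;Integrated Security=true";

    public static void Reset()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            var command = connection.CreateCommand();
            // Wipe in dependency order, then repopulate the known baseline.
            command.CommandText =
                "DELETE FROM OrderLines; DELETE FROM Orders; DELETE FROM Customers;" +
                "INSERT INTO Customers (Id, Name) VALUES (1, 'Test Customer');";
            command.ExecuteNonQuery();
        }
    }
}

[TestFixture]
public class OrderPostingTests
{
    [SetUp]
    public void ResetDatabase() => TestDatabase.Reset();   // known state for every test

    [Test]
    public void Posting_an_order_creates_order_lines()
    {
        // ... exercise the code that hits MyApp_Test, then assert on the results
    }
}
```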
One thing I did was create static methods that returned test data of a known state. I would then use a "fake" DAL to return this data as if I were actually calling the database. As for testing the SQL/stored procedures, I tested those using SQL Management Studio. YMMV!

How do you unit test business applications?

How are people unit testing their business applications? I've seen a lot of examples of unit testing with "simple to test" examples. Ex. a calculator. How are people unit testing data-heavy applications? How are you putting together your sample data? In many cases, data for one test may not work at all for another test, which makes it hard to just have one test database.
Testing the data access portion of the code is fairly straightforward. It's testing out all the methods that work against the data that seem to be hard to test. For example, imagine a posting process where there is heavy data access to determine what is posted, numbers are adjusted, etc. There are a number of interim steps that occur (and need to be tested) along with tests afterwards that ensure the posting was successful. Some of those steps may actually be stored procedures.
In the past I've tried inserting the test data in a test database, then running the test, but honestly it's pretty painful to write this kind of code (and error prone). I've also tried just building a test database up front and rolling back the changes. That works OK but in a number of places you can't easily do this either (and many people would say that's integration testing; so be it, I still need to be able to test this somehow).
If the answer is that there isn't a nice way of handling this and it currently just sort of sucks, that would be useful to know as well.
Any thoughts, ideas, suggestions, or tips are appreciated.
My automated functional tests usually follow one of two patterns:
Database Connected Tests
Mock Persistence Layer Tests
Database Connected Tests
When I have automated tests that are connected to the database, I usually make a single test database template that has enough data for all the tests. When the automated tests are run, a new test database is generated from the template for every test. The test database has to be constantly re-generated because tests will often change the data. As tests are added, I usually append more data to the test database template.
There are some nice advantages to this testing method. The obvious advantage is that the tests also exercise your schema. Another advantage is that after setting up the initial tests, most new tests will be able to re-use the existing test data. This makes it easy to add more tests.
The downside is that the test database will become unwieldy. Because data will usually be added one test at a time, it will be inconsistent and maybe even unrealistic. You will also end up cursing the person who set up the test database when there is a significant database schema change (which for me usually means I end up cursing myself).
This style of testing obviously doesn't work if you can't generate new test databases at will.
Mock Persistence Layer Tests
For this pattern, you create mock objects that live with the test cases. These mock objects intercept the calls to the database so that you can programmatically provide the appropriate results. Basically, when the code you're testing calls the findCustomerByName() method, your mock object is called instead of the persistence layer.
The nice thing about using mock object tests is that you can get very specific. Often times, there are execution paths that you simply can't reach in automated tests w/o mock objects. They also free you from maintaining a large, monolithic set of test data.
Another benefit is the lack of external dependencies. Because the mock objects simulate the persistence layer, your tests are no longer dependent on the database. This is often the deciding factor when choosing which pattern to choose. Mock objects seem to get more traction when dealing with legacy database systems or databases with stringent licensing terms.
The downside of mock objects is that they often result in a lot of extra test code. This isn't horrible, because almost any amount of testing code is cheap when amortized over the number of times you run the test, but it can be annoying to have more test code than production code.
I have to second the comment by #Phil Bennett as I try to approach these integration tests with a rollback solution.
I have a very detailed post about integration testing your data access layer here
I show not only the sample data access class, base class, and sample DB transaction fixture class, but a full CRUD integration test with sample data shown. With this approach you don't need multiple test databases, as you can control the data going in with each test, and after the test is complete the transactions are all rolled back so your DB is clean.
About unit testing business logic inside your app, I would also second the comments by #Phil and #Mark because if you mock out all the dependencies your business object has, it becomes very simple to test your application logic one entity at a time ;)
Edit: So are you looking for one huge integration test that will verify everything from the pre-database logic, through the stored procedure run, and finally a verification on the way back? If so, you could break this out into 2 steps:
1. Unit test the logic that happens before the data is pushed into your data access code. For example, if you have some code that calculates some numbers based on some properties, write a test that only checks whether the logic of this one function does what you asked it to do. Mock out any dependency on the data access class so you can ignore it for this test of the application logic alone.
2. Integration test the logic that happens once you take your manipulated data (from the method we unit tested above) and call the appropriate stored procedure. Do this inside a data-specific testing class so you can roll back after it's completed. After your stored procedure has run, query the database to get your object, now that some logic has been applied to the data, and verify it has the values you expected (post-stored-procedure logic, etc.).
If you need an entry in your database for the stored procedure to run, simply insert that data before you run the sproc that has your logic inside it. For example, if you have a product that you need to test, it might require a supplier and category entry, so before you insert your product do a quick and dirty insert for a supplier and a category so your product insert works as planned.
It depends on what you're testing. If you're testing a business logic component, then it's immaterial where the data is coming from, and you'd probably use a mock or a hand-rolled stub class that simulates the data access routine the component would have called in the wild. The only time I mess with the data access is when I'm actually testing the data access components themselves.
Even then I tend to open a DB transaction in the TestFixtureSetUp method (obviously this depends on what unit testing framework you might be using) and roll back the transaction in TestFixtureTearDown at the end of the test suite.
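A sketch of that arrangement with NUnit 2-era attribute names (newer NUnit calls them OneTimeSetUp/OneTimeTearDown); note the data access code under test has to be handed this same connection and transaction for the rollback to cover its work:

```csharp
// Fixture-level transaction: everything the tests write is rolled back at the
// end of the suite. Connection string and test body are placeholders.
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class DataAccessTests
{
    private SqlConnection _connection;
    private SqlTransaction _transaction;

    [TestFixtureSetUp]
    public void OpenTransaction()
    {
        _connection = new SqlConnection(
            "Server=localhost;Database=MyApp_Test;Integrated Security=true");
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    [Test]
    public void Saves_and_reloads_an_order()
    {
        // ... run the data access component using _connection/_transaction
    }

    [TestFixtureTearDown]
    public void RollBackEverything()
    {
        _transaction.Rollback();   // nothing the fixture wrote survives
        _connection.Dispose();
    }
}
```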
Mocking frameworks enable you to test your business objects.
Data-driven tests often end up becoming more of an integration test than a unit test; they also carry the burden of managing the state of a data store before and after each test, and the time taken in connecting and executing queries.
In general I would avoid unit tests that touch the database from your business objects. For testing your database you need a different strategy.
That being said, you can never totally get away from data-driven testing; you can only limit the number of tests that actually need to invoke your back-end systems.
It sounds like you might be testing message based systems, or systems with highly parameterised interfaces, where there are large numbers of permutations of input data.
In general all the rules of standard unit testing still hold:
Try to make the units being tested as small and discrete as possible.
Try to make tests independent.
Factor code to decouple dependencies.
Use mocks and stubs to replace dependencies (like data access)
Once this is done you will have removed a lot of the complexity from the tests, hopefully revealing good sets of unit tests, and simplifying the sample data.
A good methodology for compiling sample data for tests that still require complex input data is orthogonal testing, or see here.
I've used that sort of method for generating test plans for WCF and BizTalk solutions where the permutations of input messages can create multiple possible execution paths.
For lots of different runs over the same logic but with different data, you can use CSV: as many columns as you like for the input and the last one for the output, etc.
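A sketch of the CSV approach using NUnit's TestCaseSource: each row of the file becomes one test case, input columns first and the expected output last. The file name, columns and DiscountCalculator are invented for illustration.

```csharp
// Data-driven test fed from a CSV file; every row is one test case.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using NUnit.Framework;

public static class DiscountCalculator
{
    public static decimal Apply(decimal amount, decimal percent) =>
        Math.Round(amount * (1 - percent / 100m), 2);
}

[TestFixture]
public class DiscountCalculatorTests
{
    // CSV layout: amount,percent,expected
    private static IEnumerable<object[]> CsvCases() =>
        File.ReadLines("discount-cases.csv")
            .Skip(1)                                     // header row
            .Select(line => line.Split(','))
            .Select(cols => new object[]
            {
                decimal.Parse(cols[0], CultureInfo.InvariantCulture),
                decimal.Parse(cols[1], CultureInfo.InvariantCulture),
                decimal.Parse(cols[2], CultureInfo.InvariantCulture)  // expected output
            });

    [TestCaseSource(nameof(CsvCases))]
    public void Applies_the_expected_discount(decimal amount, decimal percent, decimal expected)
    {
        Assert.AreEqual(expected, DiscountCalculator.Apply(amount, percent));
    }
}
```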