MyBatis - How to unit test result maps? - unit-testing

The MyBatis documentation states:
Always build ResultMaps incrementally. Unit tests really help out here. If you try to build a gigantic resultMap like the one above all at once, it's likely you'll get it wrong and it will be hard to work with. Start simple, and evolve it a step at a time. And unit test! The downside to using frameworks is that they are sometimes a bit of a black box (open source or not). Your best bet to ensure that you're achieving the behaviour that you intend, is to write unit tests. It also helps to have them when submitting bugs.
However, at no point in the documentation do they explain how to unit test resultMaps. On their GitHub wiki they have a page on unit testing, but it seems to be geared more towards contributors than users. How do I write unit tests for result maps, as they recommend, when it seems like I would need to build a fully functional in-memory database just to test the mappings?
Needing an in-memory DB sounds more like an integration test than a unit test, and if any of my queries use DB-specific SQL (such as SQL Server T-SQL statements), they couldn't be tested properly on an in-memory DB of a different type. Am I misunderstanding something?

Your understanding is correct.
What is called a unit test on the wiki is actually an integration test against the in-memory database. You can try to provide a mocked DataSource which returns a mocked Connection and so on down to the ResultSet, but this is not very practical. An integration test is a better fit here.
I would say the main point is not the unit-test part (which should have been called just tests in this context) but the incremental part. MyBatis error messages in mappers are sometimes cryptic, so a short (TDD-like) feedback loop helps to deal with them. If you run the tests only for the single mapper you are working on, the edit-run cycle can be rather short.
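For illustration, here is a minimal sketch of such a per-mapper integration test, assuming JUnit 5, an in-memory HSQLDB, and hypothetical UserMapper/User types with a matching resultMap in the mapper XML (mybatis-config.xml and schema.sql are also assumed names):

```java
import java.io.Reader;

import org.apache.ibatis.io.Resources;
import org.apache.ibatis.jdbc.ScriptRunner;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

class UserMapperTest {

    private static SqlSessionFactory factory;

    @BeforeAll
    static void createSchema() throws Exception {
        // mybatis-config.xml points the test environment at an in-memory database,
        // e.g. an HSQLDB URL such as jdbc:hsqldb:mem:mapper_test
        try (Reader config = Resources.getResourceAsReader("mybatis-config.xml")) {
            factory = new SqlSessionFactoryBuilder().build(config);
        }
        // Create the tables and a couple of known rows before any test runs.
        try (SqlSession session = factory.openSession();
             Reader schema = Resources.getResourceAsReader("schema.sql")) {
            new ScriptRunner(session.getConnection()).runScript(schema);
        }
    }

    @Test
    void resultMapPopulatesUser() {
        try (SqlSession session = factory.openSession()) {
            UserMapper mapper = session.getMapper(UserMapper.class);

            User user = mapper.selectById(1);          // exercises the resultMap

            assertNotNull(user);
            assertEquals("alice", user.getName());     // column -> property mapping
            assertEquals(2, user.getRoles().size());   // nested collection mapping
        }
    }
}
```

Running just this one test class after each change to the result map keeps the feedback loop short.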

Related

How is unit testing testing anything?

I don't understand how I'm testing anything with unit testing.
Suppose I am testing that my repository class can retrieve values from the database correctly. The proper way to do this would be to actually call the real database and retrieve and check those values.
But the idea behind unit testing is that it should be done in isolation, and connecting to a running database is not isolation. So what is usually done is to mock or stub the database.
But why would testing on a fake database with hardcoded data and hardcoded return values even test anything? It seems tautological and a waste of time.
Or am I not understanding how to unit test properly?
Does one even unit test database calls?
I don't understand how I'm testing anything with unit testing.
Short answer: you are testing the logic, and leaving out the side effects.
You aren't testing everything; but you are testing something.
Furthermore, if you keep in mind that you aren't really testing the code with side effects, then you are motivated to arrange your code so that the pieces that actually depend on the side effect are small. The big pieces don't actually care where the data comes from, so those are easy to test.
So "something" can be "most things".
There is an impedance problem -- if your test doubles impersonate the production originals inadequately, then some of your test results will be inaccurate.
my philosophy is to test as little as possible to reach a given level of confidence
Kent Beck, 2008
One way of imagining "as little as possible" is to think in terms of cost -- we're aiming for a given confidence level, so we want to achieve as much of that confidence as we can using cheap unit tests, and then make up the difference with more expensive techniques.
Cory Benfield's talk Building Protocol Libraries the Right Way describes an example of the kind of separation we're talking about here. The logic of how to parse an HTTP message is separable from the problem of reading the bytes. If you make the complicated part easy to test, and the hard to test part too simple to fail, your chances of succeeding are quite good.
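As a minimal sketch of that separation (hypothetical names, JUnit 5, Java 16+ records): the status-line parser below is a pure function, so it can be tested exhaustively without opening a socket, while the code that actually reads bytes stays too simple to fail.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class StatusLineParserTest {

    // Hypothetical value type for the parsed result.
    record StatusLine(String version, int code, String reason) {}

    // Pure logic: no sockets, no streams, trivially unit-testable.
    static StatusLine parseStatusLine(String line) {
        String[] parts = line.split(" ", 3);
        return new StatusLine(parts[0], Integer.parseInt(parts[1]), parts[2]);
    }

    @Test
    void parsesAnOkStatusLine() {
        StatusLine status = parseStatusLine("HTTP/1.1 200 OK");

        assertEquals("HTTP/1.1", status.version());
        assertEquals(200, status.code());
        assertEquals("OK", status.reason());
    }
}
```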
I think your concern is valid. For me, TDD is more of an evolutionary design practice than unit testing practice, but I'll save that for another discussion.
In your example, what we are really testing is that the logic contained within your individual classes is sound. By stubbing the data coming from the database you create a controlled scenario, so you can ensure your code works for that particular scenario. This makes it much easier to ensure full test coverage for all data scenarios. You're correct that this really doesn't test the whole system end to end, but the point is to reduce the overall test maintenance costs and enable faster feedback.
My approach is to mock most collaborators at the unit test level, then write acceptance tests at the integration test level, which validate your system using real data. Because the unit tests with their mocked data allow you to try out various data scenarios, you only need to test a few of those scenarios using integration tests to feel confident that your code will perform as you expect.
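A minimal sketch of that approach with Mockito (all names hypothetical): the repository collaborator is stubbed to return exactly the data scenario under test, so only the service logic is exercised.

```java
import java.util.Optional;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class UserServiceTest {

    record User(long id, String name, boolean active) {}

    interface UserRepository {                       // the collaborator we stub
        Optional<User> findById(long id);
    }

    static class UserService {                       // the logic under test
        private final UserRepository repository;
        UserService(UserRepository repository) { this.repository = repository; }

        String describeStatus(long id) {
            return repository.findById(id)
                    .map(u -> u.active() ? "active" : "inactive")
                    .orElse("unknown");
        }
    }

    @Test
    void inactiveUsersAreReportedAsSuchWithoutTouchingTheDatabase() {
        // Controlled scenario: the "database" returns exactly the row we need.
        UserRepository repository = mock(UserRepository.class);
        when(repository.findById(42L)).thenReturn(Optional.of(new User(42L, "bob", false)));

        assertEquals("inactive", new UserService(repository).describeStatus(42L));
    }
}
```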
You can test your code against an actual database in isolation. Just create a new database instance for every test, or execute the tests synchronously one after another and clean the database before each test.
But using an actual database will make your tests slow, which will slow down your work, because you want quick feedback on what you are doing.
Do not test every class; test the main feature logic, which can use many different classes, and mock/stub only the dependencies that make tests slow.
Find your application boundaries and test the logic between them without mocking.
For example, in a trivial web API application the boundaries can be:
- controller action -> request(input)
- controller action -> response(output)
- database -> side effect of received request.
Assume we live in a perfect world where setting up a new database and web server takes milliseconds. Then you would test the whole pipeline of your application:
1. Configure database for test
2. Send a request to the web API server
3. Assert that response contains expected data
4. Assert that database state changed as expected
But in the real world your boundaries will be the controller action and an abstracted database access point, which makes your test look like this:
1. Configure a mocked database access point (repository)
2. Call controller action with given parameters
3. Assert that action returns expected result
4. Possibly assert that the mocked repository received the expected update arguments.
If your application has no logic and just reads/updates data from the database, test with an actual database or, if your database framework allows it, use an in-memory database.
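A minimal sketch of the four mocked-boundary steps above, using Mockito (all names hypothetical):

```java
import org.junit.jupiter.api.Test;
import org.mockito.ArgumentCaptor;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class OrderControllerTest {

    record Order(String product, int quantity) {}

    record Response(int status, long orderId) {}

    interface OrderRepository {                      // abstracted database access point
        long save(Order order);
    }

    static class OrderController {                   // the action under test
        private final OrderRepository repository;
        OrderController(OrderRepository repository) { this.repository = repository; }

        Response placeOrder(String product, int quantity) {
            long id = repository.save(new Order(product, quantity));
            return new Response(201, id);
        }
    }

    @Test
    void placingAnOrderStoresItAndReturnsCreated() {
        // 1. Configure a mocked database access point (repository).
        OrderRepository repository = mock(OrderRepository.class);
        OrderController controller = new OrderController(repository);

        // 2. Call the controller action with given parameters.
        Response response = controller.placeOrder("widget", 3);

        // 3. Assert that the action returns the expected result.
        assertEquals(201, response.status());

        // 4. Assert that the mocked repository received the expected update arguments.
        ArgumentCaptor<Order> saved = ArgumentCaptor.forClass(Order.class);
        verify(repository).save(saved.capture());
        assertEquals(new Order("widget", 3), saved.getValue());
    }
}
```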

How to do unit testing in Microsoft Dynamics AX 2012 in a real world project

Dynamics AX 2012 comes with unit testing support.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very fast if you have a real-life project.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
I agree that creating test data in a new and empty company only works for fairly trivial scenarios or scenarios where you implemented the whole data structure yourself. But as soon as existing data structures are needed, this approach can become very time consuming.
One approach that worked well for me in the past is to run unit tests in an existing company that already has most of the configuration data (e.g. financial setup, inventory setup, ...) needed to run the test. The test itself runs in a ttsBegin - ttsAbort block so that the unit test does not actually create any data.
Another approach is to implement data provider methods that are test agnostic, but create data that is often used in unit tests (e.g. a method that creates a product). It takes some time to create a useful set of data provider methods, but once they exist, writing unit tests becomes a lot faster. See SysTest part V.: Test execution (results, runners and listeners) on how Microsoft uses a similar approach (or at least they used to back in 2007 for AX 4.0).
Both approaches can also be combined, you would call the data provider methods inside the ttsBegin - ttsAbort block to create the needed data only for the unit test.
Another useful method is to use doInsert or doUpdate to create your test data, especially if you are only interested in a few fields and do not need to create a completely valid record.
I think that the unit test framework was an afterthought. In order to really use it, Microsoft would have needed to provide unit test classes, then when you customize their code, you also customize their unit tests.
So without that, you're essentially left coding unit tests that try and encompass base code along with your modifications, which is a huge task.
Where I think you can actually use it is around isolated customizations that perform some function, and aren't heavily built on base code. And also with customizations that are integrations with external systems.
Well, from my point of view, you will not be able to get much more out of the standard framework than what you already pointed out.
What you can do is more around release management. You can set up an integration environment with the targeted data, push your nightly-build model into this environment at the end of the build process, and then run your tests.
Yes, it will need more effort to set up and to maintain, but it's the only solution I've seen until now to have a large and consistent set of data to run unit or integration tests on.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
As someone else already indicated - I found it best to leverage an existing company for data. In my case, several existing companies.
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
We have built test helpers that help us "run the test", automating what a person would do, given you have architected your application to be testable. In essence our test class uses the helpers to run the test, then provides most of the value by validating the data it created.
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very fast if you have a real-life project.
I did not find this practical in our situation, so we haven't leveraged it.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
We've been using the testing framework as stated above and it has been working for us. The key is to find the correct scenarios to test; it also provides a good foundation for writing testable classes.

Should I Unit Test Data Access Layer? Is this a good practice and how to do it?

If I have a Data Access Layer (nHibernate), for example a class called UserProvider, and a Business Logic class UserBl, should I test both of their methods, SaveUser or GetUserById, or any other public method in the DA layer which is called from the BL layer? Is this redundant, or is it common practice?
Is it common to unit test the DA layer, or does that belong in the integration test domain?
Is it better to have a test database, or to create database data during the test?
Any help is appreciated.
There's no right answer to this; it really depends. Some people (e.g. Roy Osherove) say you should only test code which has conditional logic (IF statements etc.), which may or may not include your DAL. Some people (often those doing TDD) will say you should test everything, including the DAL, and aim for 100% code coverage.
Personally I only test it if it has logic in it, so I end up with some DAL methods tested and some not. Most of the time you just end up checking that your BL calls your DAL, which has some merit but I don't find necessary. I think it makes more sense to have integration tests which cover the app end-to-end, including the database, which covers things like GetUserById.
Either way, and you probably know this already, make sure your unit tests don't touch an actual database. (There's nothing wrong with doing that, but then it's an integration test rather than a unit test, as it takes a lot longer, involves complex setup, and should be run separately.)
It is a good practice to write unit test for every layer, even the DAL.
I don't think running tests on the real DB is a good idea; you might ruin important data. We used to set up a copy of the DB for tests, with just enough data in it to run the tests on.
In our test project we had a special web.config file with test settings, like a ConnectionString to our test db.
In my experience it was helpful to test each layer on its own, then integrate and test again. Integration tests normally do not cover all aspects. Sometimes, if the Data Access Layer (I don't know nHibernate) is generated code or a sort of generic code, it looks like overkill. But I have seen it more than once that systematic testing pays off.
Is it redundancy? In my opinion it is not.
Is it common practice? Hard to tell. I would say no. I have seen it in some projects but not in all projects I worked in. It often depended on time/resources and the mentality of the team / individual developer.
Is it better to have a test database, or create database data during the test? This is quite a different question and cannot be answered easily. Creating a new one is good but sometimes throws up unrealistic bugs (although still bugs). It depends on your project (product development or proprietary development). Usually in a proprietary on-site development, data gets migrated into the database from somewhere, so a second test with the migrated data is definitely needed. But that is rather at the system test level.
Unit testing the DAL is worth it, as mentioned, if there is logic in there. For example, if you use the same stored proc for insert & update, it's worth knowing that an insert works, that a subsequent call updates the previous row, and that a select returns it and not a list. In your case the SaveUser method probably inserts the first time around and subsequently updates; it's nice to know that this is what's being done at the unit test stage.
If you're using a framework like iBatis or Hibernate where you can implement type handlers, it's worth confirming that the handlers handle values in a way that's acceptable to your underlying DB.
As for testing against an actual DB, if you use a framework like Spring you can avail of the supported database unit test classes with automatic rollback of transactions, so you run your tests and the DB is unaffected afterwards. Others probably offer similar support.
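For example, with Spring's TestContext framework a test method annotated @Transactional is rolled back by default, so the database is left untouched. A minimal sketch (AppConfig and the users table are hypothetical):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.jupiter.api.Assertions.assertEquals;

// AppConfig is a hypothetical @Configuration class defining the DataSource and JdbcTemplate.
@SpringJUnitConfig(AppConfig.class)
@Transactional
class UserDaoRollbackTest {

    @Autowired
    private JdbcTemplate jdbc;

    @Test
    void insertedRowIsVisibleDuringTheTestButRolledBackAfterwards() {
        jdbc.update("insert into users (id, name) values (?, ?)", 1, "alice");

        Integer count = jdbc.queryForObject(
                "select count(*) from users where id = ?", Integer.class, 1);
        assertEquals(1, count);
        // When the test method finishes, the surrounding test-managed transaction
        // is rolled back, so the row never reaches the shared database.
    }
}
```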

Testing Real Repositories

I've set up unit tests that test a fake repository and tests that make use of a fake repository.
But what about testing the real repository that hits the database? If this is left to integration tests then it would seem that it isn't tested directly and problems could be missed.
Am I missing something here?
Well, the integration tests would only test the literal persistence or retrieval of data to and from the layer of persistence. If your repository is doing any kind of logic concerning that data (validation, throwing exceptions if an object isn't found, etc.), that can be unit tested by faking what the persistence layer returns (whether it returns the queried object, a return code, or something else). Your integration test will assure you that the code can physically persist/retrieve data from persistence, and that's it. Any sort of logic to test ought to belong in a unit test.
Sometimes, however, logic could exist in the persistence layer itself (e.g. stored procedures). This could be for the sake of efficiency, or it could merely be legacy code. This is harder to properly unit test, as you can only access the logic by getting to the database. In this scenario, it'd probably be best to try and move the logic to your code base as much as possible, so that it can be tested more easily. There probably exist unit testing frameworks for scenarios such as these, but I'm not aware of them (merely out of inexperience).
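A minimal sketch of the first case (all names hypothetical): the repository's own logic, here turning a missing row into an exception, is unit tested by faking the thin persistence gateway underneath it.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertThrows;

class CustomerRepositoryTest {

    record CustomerRecord(long id, String name) {}

    interface CustomerGateway {              // the thin layer that actually hits the DB
        CustomerRecord load(long id);        // returns null when no row exists
    }

    static class CustomerNotFoundException extends RuntimeException {}

    static class CustomerRepository {
        private final CustomerGateway gateway;
        CustomerRepository(CustomerGateway gateway) { this.gateway = gateway; }

        CustomerRecord getById(long id) {
            CustomerRecord record = gateway.load(id);
            if (record == null) {            // the logic under test
                throw new CustomerNotFoundException();
            }
            return record;
        }
    }

    @Test
    void missingCustomerTurnsIntoAnException() {
        // The fake gateway stands in for the persistence layer and always finds nothing.
        CustomerRepository repository = new CustomerRepository(id -> null);

        assertThrows(CustomerNotFoundException.class, () -> repository.getById(7L));
    }
}
```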
Can you set up a real repository that tests against a fake database?
Regardless of what you do, this is integration testing, not unit testing.
I'd definitely suggest integration tests against the DAL, within reason.
We don't use the Repository pattern per se (to our chagrin), but our policy for similar classes (Searchers) is as follows:
If the method does a simple retrieve from the database using an O/RM call, don't test it.
If the method uses query-building features of the O/RM, test it.
If the method contains a string (such as a column name), test it.
If the method calls a stored procedure, test it.
If the method contains logic, test it. But try to avoid logic.
If the method bypasses the O/RM and uses raw SQL, test it. But really try to avoid this.
The gist is you should know your O/RM works (and hopefully has tests), so there's no reason to test basic CRUD behavior.
You'll definitely want a "test deck" - an in-memory database, a local file-backed database that can be checked into source control, or (if you have to) a shared database. Some testing frameworks offer rollback facilities to restore the database state; just be careful if you're hitting multiple databases in the same test or (in some cases) if you have embedded transactions.
EDIT: Note that these integration tests will still test your repository in "isolation" (save for the database). All your other unit tests will use a fake repository.
I recently covered a very similar question over here.
In summary: test your concrete Repository implementations if there's value in doing so. If you are doing something complex in your implementation, it is probably a good idea to test it. If you are using an ORM with no custom logic, there may not be much value in writing tests at that level.

When unit testing, do you have to use a database to test CRUD operations?

When unit testing, is it a must to use a database when testing CRUD operations?
Can SQLite help with this? Do you have to re-create the DB somehow in memory?
I am using mbunit.
No. Integrating an actual DB would be integration testing. Not unit testing.
Yes, you could use any in-memory DB like SQLite or MS SQL Compact for this if you can't abstract (mock) your DAL/DAO in any other way.
With this in mind I have to point out that unit testing is possible all the way down to the DAL, but not for the DAL itself. The DAL will have to be tested against some sort of actual DB in integration testing.
As with all complicated questions, the answer is: it depends :)
In general you should hide your data access layer behind an interface so that you can test the rest of the application without using a database, but what if you would like to test the data access implementation itself?
In some cases, some people consider this redundant since they mostly use declarative data access technologies such as ORMs.
In other cases, the data access component itself may contain some logic that you may want to test. That can be an entirely relevant thing to do, but you will need the database to do that.
Some people consider this to be Integration Tests instead of Unit Tests, but in my book, it doesn't matter too much what you call it - the most important thing is that you get value out of your fully automated tests, and you can definitely use a unit testing framework to drive those tests.
A while back I wrote about how to do this on SQL Server. The most important thing to keep in mind is to avoid the temptation to create a General Fixture with some 'representative data' and attempt to reuse this across all tests. Instead, you should fill in data as part of each test and clean it up after.
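A minimal sketch of that per-test approach with plain JDBC and JUnit 5 (the connection URL and the orders table are hypothetical): each test inserts exactly the rows it needs and removes them again afterwards, rather than leaning on a shared "representative" fixture.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class OrderQueryTest {

    private Connection connection;

    @BeforeEach
    void insertTestData() throws Exception {
        // Hypothetical connection string; point it at a developer-local database.
        connection = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=test_db;user=dev;password=dev");
        try (PreparedStatement insert = connection.prepareStatement(
                "insert into orders (id, product, quantity) values (?, ?, ?)")) {
            insert.setInt(1, 1001);
            insert.setString(2, "widget");
            insert.setInt(3, 3);
            insert.executeUpdate();
        }
    }

    @AfterEach
    void removeTestData() throws Exception {
        // Clean up the rows this test created, leaving the database as we found it.
        try (Statement delete = connection.createStatement()) {
            delete.executeUpdate("delete from orders where id = 1001");
        }
        connection.close();
    }

    @Test
    void findsTheRowThisTestInserted() throws Exception {
        try (Statement query = connection.createStatement();
             ResultSet rs = query.executeQuery(
                     "select quantity from orders where id = 1001")) {
            assertTrue(rs.next());
            assertEquals(3, rs.getInt("quantity"));
        }
    }
}
```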
When unit testing, is it a must to use a database when testing CRUD operations?
Assume for a moment that you have extracted interfaces around said CRUD operations and have tested everything that uses those interfaces via mocks or stubs. You are now left with a chunk of code that is a save method, containing a bit of code to bind objects and some SQL.
If so, then I would declare that a "unit" and say you do need a database, and ideally one that is at least a good representation of your database, lest you be caught out by vendor-specific SQL.
I'd also make light use of mocks in order to force error conditions, but I would not test the save method itself with just mocks. So while technically this may be an integration test I'd still do it as part of my unit tests.
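A minimal sketch of using a mock only to force the error path (all names hypothetical): the real save method is exercised against a DataSource whose connection fails, to check that the SQLException is translated; the happy path still runs against a real database.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class UserDaoErrorHandlingTest {

    static class DataAccessException extends RuntimeException {
        DataAccessException(Throwable cause) { super(cause); }
    }

    static class UserDao {                           // the save method under test
        private final DataSource dataSource;
        UserDao(DataSource dataSource) { this.dataSource = dataSource; }

        void save(long id, String name) {
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(
                         "insert into users (id, name) values (?, ?)")) {
                stmt.setLong(1, id);
                stmt.setString(2, name);
                stmt.executeUpdate();
            } catch (SQLException e) {
                throw new DataAccessException(e);    // the translation being tested
            }
        }
    }

    @Test
    void connectionFailureIsWrappedInADataAccessException() throws Exception {
        // The mock is only used to force the error condition.
        DataSource dataSource = mock(DataSource.class);
        when(dataSource.getConnection()).thenThrow(new SQLException("connection refused"));

        UserDao dao = new UserDao(dataSource);
        assertThrows(DataAccessException.class, () -> dao.save(1L, "alice"));
    }
}
```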
Edit: Missed 2/3s of your question. Sorry.
Can sql lite help with this?
I have in the past used in-memory databases and have been bitten: either the database I used and the live system did something different, or they took quite some time to start up. I would recommend that every developer have a developer-local database anyway.
Do you have to cre-create the db somehow in memory?
In the database, yes. I use DbUnit to splatter data and manually keep the schema up to date with SQL scripts, but you could use just SQL scripts. Having a developer-local database does add some additional maintenance, as you have both the schema and the datasets to keep up to date, but personally I find it worthwhile as you can be sure the database layer is working as expected.
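A minimal sketch of that DbUnit setup (DbUnit 2.x API; the driver, connection details and dataset file are hypothetical): the schema is assumed to already exist and be maintained by the SQL scripts, while DbUnit resets the rows to a known state before each test.

```java
import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class UserDaoDbUnitTest {

    private IDatabaseTester tester;

    @BeforeEach
    void loadDataset() throws Exception {
        // Hypothetical driver and connection details for a developer-local database.
        tester = new JdbcDatabaseTester("org.hsqldb.jdbcDriver",
                "jdbc:hsqldb:hsql://localhost/dev_db", "sa", "");
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/users-dataset.xml"));
        tester.setDataSet(dataSet);
        tester.onSetup();   // defaults to CLEAN_INSERT: a known state before every test
    }

    @AfterEach
    void tearDown() throws Exception {
        tester.onTearDown();
    }

    @Test
    void readsTheRowsDefinedInTheDataset() throws Exception {
        // Exercise the data access code under test here; every run starts from
        // the same rows defined in users-dataset.xml.
    }
}
```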
As others already pointed out, what you are trying to achieve isn't unit testing but integration testing.
Having said that, and even though I prefer unit testing in isolation with mocks, there is nothing really wrong with integration testing. So if you think it makes sense in your context, just include integration testing in your testing strategy.
Now, regarding your question, I'd check out DbUnit.NET. I don't know the .NET version of this tool, but I can tell you that the Java version is great for tests interacting with a database. In a few words, DbUnit allows you to put the database in a known state before a test is run and to perform asserts on the content of tables. Really handy. BTW, I'd recommend reading the Best Practices page, even if you decide not to use this tool.
Really, if you are writing a test that connects to a database, you are doing integration testing, not unit testing.
For unit testing such operations, consider using some type of mock database object. For instance, if you have a class that encapsulates your database interaction, extract an interface from it and then create an implementing class that uses simple in-memory objects instead of actually connecting to the database.
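A minimal sketch of that extract-an-interface approach (hypothetical names): the in-memory fake implements the same interface as the real database-backed class, so anything depending on UserStore can be unit tested with no connection at all.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain type and the interface extracted from the real database class.
record User(long id, String name) {}

interface UserStore {
    void save(User user);
    Optional<User> findById(long id);
}

// The real implementation talks to the database; this fake just uses a map,
// so code that depends on UserStore runs in tests without any connection.
class InMemoryUserStore implements UserStore {

    private final Map<Long, User> users = new HashMap<>();

    @Override
    public void save(User user) {
        users.put(user.id(), user);
    }

    @Override
    public Optional<User> findById(long id) {
        return Optional.ofNullable(users.get(id));
    }
}
```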
As mentioned above, the key here is to have your test database in a known state before the tests are run. In one real-world example, I have a couple of SQL scripts that are run prior to the tests that recreate a known set of test data. From this, I can test CRUD operations and verify that the new row(s) are inserted/updated/deleted.
I wrote a utility called DBSnapshot to help integration test sqlserver databases.
If your database schema is changing frequently it will be helpful to actually test your code against a real DB instance. People use SQLite for speedy tests (because the database runs in memory), but this isn't helpful when you want to verify that your code works against an actual build of your database.
When testing your database you want to follow a pattern similar to: back up the database, set up the database for a test, exercise the code, verify the results, restore the database to its starting state.
The above will ensure that you can run each test in isolation. My DBSnapshot utility will simplify your code if you're writing it in .NET. I think it's easier to use than DbUnit.NET.