I have legacy ASP.NET code that accesses a database. There is a data access layer that builds SqlCommands and executes them against the database.
What is the best way to unit test the data access layer? Should we actually connect to the database and execute test cases, or just use fakes?
Is it a good idea to use shims (described in the post below)?
http://msdn.microsoft.com/en-us/library/hh549176.aspx
Assuming your legacy DLL is managed, you should be able to use the Fakes feature in VS2012. Fakes is really meant for this. A typical usage of Fakes works like this:
Create a new unit test project
Add a reference to the legacy DLL (e.g. Legacy.DLL). Make sure all its dependent DLLs are also referenced in the unit test project.
Right-click Legacy.DLL in the solution's References folder and choose "Add Fakes Assembly". This generates shims for the types defined in Legacy.DLL.
Also add a reference to your product code (assuming you want to unit test a product method).
In TestMethod1, you can start shimming methods defined in Legacy.DLL and test your product code.
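For instance, a shim set up inside TestMethod1 could look roughly like the sketch below. The names used here (Legacy.Fakes.ShimCustomerRepository, GetCustomerNameInt32, ProductService, BuildGreeting) are invented for illustration; Fakes generates the real Shim* types from whatever is actually defined in Legacy.DLL.

```csharp
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ProductCodeTests
{
    [TestMethod]
    public void TestMethod1()
    {
        // All shims must live inside a ShimsContext; disposing it undoes them.
        using (ShimsContext.Create())
        {
            // Divert every call to the (hypothetical) legacy method
            // CustomerRepository.GetCustomerName(int) so no database is touched.
            Legacy.Fakes.ShimCustomerRepository.AllInstances.GetCustomerNameInt32 =
                (repository, customerId) => "Test Customer";

            // Exercise the product code that internally calls into Legacy.DLL.
            var service = new ProductService();
            string greeting = service.BuildGreeting(42);

            Assert.AreEqual("Hello, Test Customer", greeting);
        }
    }
}
```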
You can also find useful info on http://msdn.microsoft.com/en-us/library/hh708916.aspx
The best way to test a data access layer is to write integration tests that actually connect to the database. It is NOT a good idea to use fakes (whether it's Microsoft Fakes or any other test isolation framework). Doing so would prevent you from verifying the query logic in your data access layer, which is why you'd want to test it in the first place.
With granular integration tests hitting a local SQL database via the shared memory protocol, you can easily execute hundreds of tests per minute. However, each test must be responsible for creating its own test environment (i.e. the test records in the tables it accesses) and cleaning it up, in order to allow reliable test execution. If your data access layer does not manage transactions explicitly, start by using TransactionScope to automatically roll back all changes at the end of each test. This is the simplest and best option; however, if it does not work (because your legacy code manages transactions internally), try deleting data left over from previous tests at the beginning of each test. Alternatively, you can ensure that tests don't affect each other by always using new and unique primary keys for all records in every test. That way you can clean up the test database once per batch instead of once per test and improve performance.
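A rough sketch of the TransactionScope approach, assuming a placeholder CustomerDal class and connection string that are not part of the original code:

```csharp
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerDalTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void SetUp()
    {
        // Any connection opened inside the test enlists in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void TearDown()
    {
        // Disposing without calling Complete() rolls everything back,
        // leaving the test database unchanged for the next test.
        _scope.Dispose();
    }

    [TestMethod]
    public void InsertCustomer_WritesRow()
    {
        var dal = new CustomerDal("Server=(local);Database=TestDb;Integrated Security=true");

        int id = dal.InsertCustomer("Jane Doe");

        Assert.IsTrue(dal.CustomerExists(id));
    }
}
```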
With code targeting the full .NET Framework I could mock up an IDbConnection and point it at a mocked DataSet in order to test that my queries are executing correctly. Similarly, if I were using Entity Framework 6 I could have a mocked DbSet return IQueryables and test my data-layer logic against that.
However, .NET Core doesn't support DataSets (though that may change in the future?).
In the meantime, is there a way to create a collection of objects which Dapper can query using an IDbConnection, in order to test the query logic?
No. All Dapper is, is a set of extension methods on top of the IDbConnection interface.
There is no in-memory implementation of IDbConnection that understands SQL strings.
Your best bet, however, if you want to run it completely autonomously, would be to spin up a new SQL Server each time you run the unit tests. This can easily be done with the Docker image Microsoft has made for SQL Server: https://hub.docker.com/r/microsoft/mssql-server-linux/
or...
Or migrate to Entity Framework, which allows you to unit test against an in-memory backing store.
why?
Dapper just contains some useful features to generate SQL. It by no means abstracts away from SQL, and SQL is just plain text to C# code: it does not parse it, nor execute it. Thus you can't unit test your SQL/Dapper code without a database behind it.
Entity Framework does it differently. It tries to turn everything you would want to do in a database into a C# abstraction (e.g. DbSet<T>). Then there is one implementation that generates SQL and one implementation that uses an in-memory backing store. This way you can unit test your code.
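As an illustration of that second route, here is roughly what a test against EF Core's InMemory provider can look like. The Order model, ShopContext and the query are invented for this sketch, and note that the InMemory provider does not validate real SQL behaviour.

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Invented model and context, purely for illustration.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class ShopContext : DbContext
{
    public ShopContext(DbContextOptions<ShopContext> options) : base(options) { }
    public DbSet<Order> Orders { get; set; }
}

[TestClass]
public class OrderQueryTests
{
    [TestMethod]
    public void LargeOrders_AreReturned()
    {
        var options = new DbContextOptionsBuilder<ShopContext>()
            .UseInMemoryDatabase("OrderQueryTests") // in-memory backing store, no SQL involved
            .Options;

        using (var context = new ShopContext(options))
        {
            context.Orders.Add(new Order { Total = 500m });
            context.Orders.Add(new Order { Total = 5m });
            context.SaveChanges();

            // The query logic under test, written in LINQ rather than raw SQL.
            var largeOrders = context.Orders.Where(o => o.Total > 100m).ToList();

            Assert.AreEqual(1, largeOrders.Count);
        }
    }
}
```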
Microsoft's solution
Microsoft often advertises the Repository pattern. This is basically an expensive word for abstracting all your database calls/commands into separate classes, putting interfaces in front of those classes, and using the interfaces everywhere in your code (via dependency injection). Now you can write unit tests that test all of your code except the SQL queries; for the interface you create a mock and test whether the method is actually called.
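A minimal sketch of that idea, using Moq as the mocking library; the ICustomerRepository, RegistrationService and Customer types are placeholders invented for the example:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// Placeholder domain type and repository interface hiding all SQL behind it.
public class Customer { public string Name { get; set; } }

public interface ICustomerRepository
{
    void Save(Customer customer);
}

// Code under test depends only on the interface (constructor injection).
public class RegistrationService
{
    private readonly ICustomerRepository _repository;
    public RegistrationService(ICustomerRepository repository) { _repository = repository; }

    public void Register(string name) { _repository.Save(new Customer { Name = name }); }
}

[TestClass]
public class RegistrationServiceTests
{
    [TestMethod]
    public void Register_SavesCustomer()
    {
        var repository = new Mock<ICustomerRepository>();
        var service = new RegistrationService(repository.Object);

        service.Register("Jane");

        // Only verify that the repository method was called;
        // the SQL inside the real repository is not exercised here.
        repository.Verify(r => r.Save(It.Is<Customer>(c => c.Name == "Jane")), Times.Once);
    }
}
```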
Another option for testing your database access code (queries etc.) is to use a local SQL database instance, but instead of recreating it every time, start a database transaction as part of your unit-test setup and roll it back in tear-down. Depending on the isolation level you have chosen, this also addresses concurrency issues when tests/fixtures are executed in parallel.
Thank you for reading my question.
I was just wondering how I should create unit tests for an existing database layer. As of now my project has existing unit tests, but none are written for the database layer or for any function that inserts/updates/deletes data in the database.
We are using MS Test. One approach I can think of here is:
1) We create the database on the fly, i.e. an .mdf file, keep our default values ready in it, and in our setup method (NUnit) or initialize method (MS Test) we mock the objects and dump the dummy data into the tables.
Also, we are not using any mocking framework, so I am quite confused.
I need to know how we can do this from scratch. Also, are there any options available for a mocking framework?
Any pointers or samples would be highly appreciated.
Thank you again.
A C# unit test should not touch the database; you should mock the database. It should be possible to execute many thousands of unit tests on your local machine, without anything external (internet, databases, other applications), within seconds (and you want to run them every time you build your code).
That kind of leaves your question unanswered: what should your database layer tests do? It depends on what kind of logic you have in that assembly! If you have business or decision logic, you should test that; if you have mapping logic, test that. If all your database layer does is use (whatever DB framework) to push the load onto your database, then you might not have anything worth testing there.
If you want to test logic performed by your database (say, stored procedures), you should do that in the database project, and most likely not using MSTest.
Of course you can use MSTest to set up and tear down the database and perform tests, but those tests will not be unit tests.
Dynamics AX 2012 comes with unit testing support.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very fast in a real-life project.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
I agree that creating test data in a new and empty company only works for fairly trivial scenarios or scenarios where you implemented the whole data structure yourself. But as soon as existing data structures are needed, this approach can become very time consuming.
One approach that worked well for me in the past is to run the unit tests in an existing company that already has most of the configuration data (e.g. financial setup, inventory setup, ...) needed to run the test. The test itself runs in a ttsBegin - ttsAbort block so that the unit test does not actually persist any data.
Another approach is to implement data provider methods that are test agnostic, but create data that is often used in unit tests (e.g. a method that creates a product). It takes some time to create a useful set of data provider methods, but once they exist, writing unit tests becomes a lot faster. See SysTest part V.: Test execution (results, runners and listeners) on how Microsoft uses a similar approach (or at least they used to back in 2007 for AX 4.0).
Both approaches can also be combined, you would call the data provider methods inside the ttsBegin - ttsAbort block to create the needed data only for the unit test.
Another useful method is to use doInsert or doUpdate to create your test data, especially if you are only interested in a few fields and do not need to create a completely valid record.
I think that the unit test framework was an afterthought. In order to really use it, Microsoft would have needed to provide unit test classes, then when you customize their code, you also customize their unit tests.
So without that, you're essentially left coding unit tests that try and encompass base code along with your modifications, which is a huge task.
Where I think you can actually use it is around isolated customizations that perform some function, and aren't heavily built on base code. And also with customizations that are integrations with external systems.
Well, from my point of view, you will not be able to leverage much more than what you pointed out from the standard framework.
What you can do is more around release management. You can set up an integration environment with the targeted data, push your nightly-build model into this environment at the end of the build process and then run your tests.
Yes, it will need more effort to set up and maintain, but it's the only solution I've seen until now to have a large and consistent set of data to run unit or integration tests on.
To have meaningful tests some test data needs to be provided (stored in tables in the database).
As someone else already indicated - I found it best to leverage an existing company for data. In my case, several existing companies.
To get a reproducible outcome of the unit tests we need to have the same data stored in the tables every time the tests are run. Now the question is, how can we accomplish this?
We have built test helpers that help us "run the test", automating what a person would do, given you have architected your application to be testable. In essence our test class uses the helpers to run the test, then provides most of the value in validating the data it created.
I learned that there is the possibility of setting the isolation level for the TestSuite to SysTestSuiteCompanyIsolateClass. This will create an empty company and delete the company after the tests have been run. In the setup() method I can fill my test data into the tables with insert statements. This works fine for small scenarios but becomes cumbersome very fast if you have a real-life project.
I did not find this practical in our situation, so we haven't leveraged it.
I was wondering if there is anyone out there with a practical solution of how to use the X++ Unit Test Framework in a real world scenario. Any input is very much appreciated.
We've been using the testing framework as stated above and it has been working for us. The key is to find the correct scenarios to test; it also provides a good foundation for writing testable classes.
We're building a product (a web app) that allows users to create custom databases and store data within those DBs.
Our issue with testing the frontend (CoffeeScript) is that every test should be atomic, but that would require setting up a DB to check whether an item within that DB can be created and persists, or to see how changes in a DB affect items.
Essentially, the issue is that the setup code needed to get to the item tests basically sets up a new DB and therefore equals the code that tests setting up a new DB.
There are two approaches and we're torn on which to use:
1) Create and tear down a new DB with each group of tests
(+) Sorta Atomic (still fails if setting up a DB fails)
(-) Takes a lot of time to execute
(-) Tons of surrounding code
(-) No way to explore the created environment
(-) Messy on errors, everything fails
2) Do the setup step by step as separate tests depending on each other, with a cleanup routine at the beginning of a test
(+) The created environment can be accessed via the UI (not automatically torn down)
(+) Step by step testing, less overall/repetitive code
(-) Tests depend on each other (messy)
(-) Somewhat overall messy
We're therefore wondering whether the golden rule that tests should be atomic makes sense in such a dynamic environment.
Basically, what you are talking about are integration tests. These are different from unit tests. Examples of integration tests would be automated UI tests or Coded UI tests. In most of the projects I've worked on we've had both types of tests, and I strongly encourage you to have both types in your project too.
The philosophy behind both these tests is slightly different.
Unit Tests are meant to test isolated bits of functionality.
They are meant to be very fast.
A developer should be able to run them all on their machine in a reasonable amount of time.
There are various consequences of this philosophy.
Because a unit test tests an isolated bit of functionality, you should use mocks and stubs to isolate the rest of the environment and only focus on tiny bits of functionality.
The isolation helps your "design thinking" while writing these tests. In fact this is the reason why the unit tests are required to be fast, because a developer is actively and constantly changing the code and unit tests as part of the design and redesign process. There should be very low overhead to set up, change and run the unit tests. I should be able to ignore everything other than the problem I am trying to solve and quickly iterate and reiterate my designs and tests. This is the idea behind TDD and its claim to help write good testable code. If you are spending a long time trying to set up an overly complex unit test then you have to start reconsidering your design.
The fast nature means that you could run these as part of your Continuous Integration build.
The disadvantage is that because you are testing each piece of functionality in isolation, you don't know if they will all work together as a whole. Each time you write a mock, you are implicitly baking in an assumption about how the rest of the system works and that the rest of the system is currently working as it is meant to (i.e. nothing else is broken as part of your deployment, or the running or patching of the OS, etc.).
Integration Tests are meant to test the functionality from end to end. You try NOT to mock out or isolate any part of the system.
There are again various consequences of this philosophy. Note that there is no requirement for integration tests to be fast.
Integration tests, by their very nature need to run after your full deployment (as opposed to unit tests which can be run as soon as your code compiles).
Because they take longer, you don't run them as part of your CI environment, but you still need to run them regularly. We usually run them as part of our nightly builds. Or you can run it twice daily etc.
Because the integration tests take a black-box approach to the whole system, they don't really help with your "design thinking" about how to actually build the system. But they do help your thinking about the specification of the system as a whole, i.e. what the system should do, not how it should do it.
Note that in both cases the rule of tests being atomic still applies. Each test is different from other tests. This way, when a test fails, you can be sure about all the conditions that are causing it to fail and concentrate on fixing only that. It's just that an integration test touches as many parts of your system as possible.
To give you an example from our current project:
Let's say we need to write a bit of functionality that requires us to add a new table to the DB and bring it through all the layers to show it in the UI.
We start by creating our business logic classes, domain classes, write the appropriate web service, build view models, modify the database etc. While doing each of these we write unit tests to test the code we are currently writing. So when building the business logic classes, we mock out everything else to ensure that the logic in the class is valid (for example, clients over 60 years old get a 50% discount on their car insurance etc.)
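For instance, the "over 60 gets a 50% discount" rule could be unit tested against a hand-rolled stub along these lines; all type names here (IClientRepository, PremiumCalculator, StubClientRepository) are invented for the sketch:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Invented abstractions; the real ones would live in the business logic layer.
public interface IClientRepository
{
    int GetAge(int clientId);
}

public class PremiumCalculator
{
    private readonly IClientRepository _clients;
    public PremiumCalculator(IClientRepository clients) { _clients = clients; }

    // Clients over 60 get a 50% discount on their premium.
    public decimal Calculate(int clientId, decimal basePremium)
    {
        return _clients.GetAge(clientId) > 60 ? basePremium * 0.5m : basePremium;
    }
}

// Hand-rolled stub: no database, just a canned answer.
public class StubClientRepository : IClientRepository
{
    private readonly int _age;
    public StubClientRepository(int age) { _age = age; }
    public int GetAge(int clientId) { return _age; }
}

[TestClass]
public class PremiumCalculatorTests
{
    [TestMethod]
    public void ClientsOver60_GetHalfPriceInsurance()
    {
        var calculator = new PremiumCalculator(new StubClientRepository(age: 70));

        Assert.AreEqual(500m, calculator.Calculate(clientId: 1, basePremium: 1000m));
    }
}
```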
Once we do that, we now need to update our deployment scripts / packages etc. to be able to deploy it. i.e update the database creation SQL scripts and the database alteration SQL scripts etc. (In your case this will be complex process).
Now we write integration tests. In this case we might test from SQL Server to the web service. There is a SQL integration test base class which contains the set-up and tear-down methods for each test. In the set-up we create a brand new database using our SQL deployment scripts. Each test also specifies a test data SQL script. For example, this test data script might insert a new record into the client table whose age is 70 years. We run this script as part of the "Arrange" of our test. Then we make a web service call to search for clients older than 60. This is the "Act" part of the test, and from the result we check to make sure that we only get back the user we inserted into the DB. At the end of the test, the database is deleted. We've caught bugs here when columns in the SQL database weren't nullable, or when datetime columns overflowed because .NET's default minimum DateTime falls outside the range of SQL Server's datetime type.
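A stripped-down sketch of what such a base class can look like; the database name, script paths and connection strings are all placeholders, and real deployment scripts containing GO separators would need to be split into batches first:

```csharp
using System.Data.SqlClient;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class SqlIntegrationTestBase
{
    private const string Master = "Server=(local);Database=master;Integrated Security=true";
    protected const string TestDb = "Server=(local);Database=IntegrationTestDb;Integrated Security=true";

    // Each concrete test class supplies its own "Arrange" data script.
    protected abstract string TestDataScript { get; }

    [TestInitialize]
    public void SetUp()
    {
        Execute(Master, "CREATE DATABASE IntegrationTestDb");
        Execute(TestDb, File.ReadAllText("Deploy/CreateSchema.sql")); // deployment script
        Execute(TestDb, File.ReadAllText(TestDataScript));            // per-test data
    }

    [TestCleanup]
    public void TearDown()
    {
        Execute(Master,
            "ALTER DATABASE IntegrationTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
            "DROP DATABASE IntegrationTestDb");
    }

    private static void Execute(string connectionString, string sql)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```

A concrete test class then derives from this, points TestDataScript at its own insert script, calls the web service in the test body, and asserts on the result.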
Some functionality requires us to interact with an Oracle database. For example, if a new record is added to Oracle, then a trigger/DB procedure kicks off and transfers that record to SQL, and then we need to bring it up through the layers. In this case we have an OracleSQL integration test base class. As you might have guessed, this follows a similar pattern, but creates both Oracle and SQL DBs, inserts test data into Oracle, and blows them both away at the end of the test.
The developers usually pick the web service layer for writing their integration tests. The testers, on the other hand, use UI automation tools to make sure that the data actually shows up on screen. For example, they will record a test that goes to the web page, puts "60" into the age box, clicks the search button, etc. That test might leverage the same test data SQL script that the developer wrote (or the testing team might come to the developer and ask for help crafting SQL scripts to insert whatever highly convoluted data they can think of). But the point is, once the test data insertion script is created, it leverages the same underlying system to blow away the whole DB, create a new one, insert test data, and run the specified test.
So, to answer your question, you will need two types of tests: unit tests and integration tests. You might have to put some initial work into creating base classes or helper methods to create/delete databases, automate your deployment to install/uninstall other components of your system, etc. You will have to do this for your final deployment anyway. Integration tests will also be closely related to and dependent on your deployment strategy. This is an advantage and not a disadvantage, in my opinion. While it might be painful at first to set it all up, one of the things your integration tests are implicitly testing is your deployment mechanism. If there are any issues with deploying/installing any of the components required by your system, you want to know about it as quickly as possible, not the day before you are supposed to deploy to production.
A good suite of tests is invaluable. It also needs to be isolated, rigorous and comprehensive. The tests shouldn't fail when they don't need to but more importantly, they should fail when they need to. And when they do fail, you want them to provide as much information as possible and point you at the exact location of failure. This makes fixing the issue a much easier task. Any time you put into building this test suite will more than pay for itself in no time.
You're not doing atomic tests if you're talking to a database.
You need to mock the database interface and talk to the mock instead. That will be fast, and you'll be able to use the mock to introduce errors that would be difficult using the real database.
What is the best practice for testing an API that depends on data from the database?
What are the issues I need to watch out for in a "Continuous Integration" environment that runs unit tests as part of the build process? I mean, would you deploy your database as part of the build scripts (maybe run your installer), or should I go for hardcoded data [use MSTest Data Driven Unit Tests with XML]?
I understand I can mock the data layer for the business logic layer, but what if I have issues in my SQL statements in the DAL? I do need to hit the database, right?
Well... that's a torrent of questions :)... Thoughts?
As far as possible you should mock out code to avoid hitting the database altogether, but it seems to me you're right about the need to test your SQL somewhere along the line. If you do write tests that hit the database, one key tip for avoiding headaches is to make sure that your setup gets the data into a known state, rather than relying on there already being suitable data available.
And of course, never test against your live database! But that goes without saying :)
As mentioned, use mocking to simulate DB calls in unit tests unless you want to fiddle with your tests and data endlessly. Testing SQL statements is more of an integration test. Run those separately from the unit tests; they are two different beasts.
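One simple way to keep the two separate in MSTest is to tag the database-hitting tests with a category and filter on it in the build; the test below is just a placeholder:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerQueryTests
{
    // Excluded from the fast unit-test run and executed separately
    // (e.g. nightly) by filtering on this category.
    [TestMethod, TestCategory("Integration")]
    public void SearchByAge_ReturnsMatchingCustomers()
    {
        // ...connect to the test database and run the real SQL here...
    }
}
```

A runner can then include or exclude that category via a test-case filter (for example, `dotnet test --filter TestCategory=Integration`).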
It's a good idea to automatically wipe the test database and then populate it with test harness data that will be assumed to be there for all of the tests that need to connect to the database. The database needs to be reset before each test for proper isolation - a failing test that puts in bad data could cause false failures on tests that follow and it gets messy if you have to run tests in a certain order for consistent results.
You can clear and populate the database with tools (DBUnit, DBUnit.NET, others) or just make your own utility classes to do the same thing.
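If you go the home-grown route, the utility can be as small as something like this; the connection string, table names and harness rows are placeholders:

```csharp
using System.Data.SqlClient;

// A minimal home-grown alternative to DBUnit-style tools: wipe the tables the
// tests touch and reload the harness data. Called from each test's setup.
public static class TestDatabase
{
    private const string ConnectionString =
        "Server=(local);Database=TestDb;Integrated Security=true";

    public static void Reset()
    {
        Execute("DELETE FROM OrderLines; DELETE FROM Orders; DELETE FROM Customers;");
        Execute("INSERT INTO Customers (Name, Age) VALUES ('Jane Doe', 70);");
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```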
As you said, other layers should be sufficiently decoupled from the classes that actually hit the database, so the need for any kind of database being involved in testing is limited to tests covering a small subset of your codebase. Your database-accessing components can be mocked/stubbed for everything that depends on them.
One thing I did was create static methods that returned test data of a known state. I would then use a "fake" DAL to return this data as if I was actually calling the database. As for testing the sql/stored procedure, I tested it using SQL Management Studio. YMMV!
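Roughly, that "fake DAL" approach looks like the following sketch; the Customer type, ICustomerDal interface and the sample rows are made up here:

```csharp
using System.Collections.Generic;

// Placeholder types for the sketch.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerDal
{
    List<Customer> GetAllCustomers();
}

// Canned data of a known state, returned by static methods.
public static class TestData
{
    public static List<Customer> KnownCustomers()
    {
        return new List<Customer>
        {
            new Customer { Id = 1, Name = "Jane Doe" },
            new Customer { Id = 2, Name = "John Smith" },
        };
    }
}

// The fake DAL hands back the canned data as if it had queried the database.
public class FakeCustomerDal : ICustomerDal
{
    public List<Customer> GetAllCustomers()
    {
        return TestData.KnownCustomers();
    }
}
```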