I'm using MSTest to retrofit unit testing in our application. A number of our tests will use a mocked list of objects (classes) that would normally come from our database. I'd like to save this list in the test project and then read it, as necessary, for each test that needs it. Is there a best practice for doing this? BTW, these lists could contain hundreds or thousands of items.
For such cases, I would use data-driven tests.
Look at the example on the MSDN site: http://msdn.microsoft.com/en-us/library/ms182527.aspx
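As a rough sketch, a data-driven MSTest method pulling its rows from a CSV file kept in the test project could look like this (the file name and column names are assumptions for illustration):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerTests
{
    // MSTest injects the current data row through this property
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("customers.csv")]
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\customers.csv",
                "customers#csv",
                DataAccessMethod.Sequential)]
    public void Discount_IsApplied_ForEachCustomerRow()
    {
        // each row of the CSV becomes one invocation of this test
        var id = Convert.ToInt32(TestContext.DataRow["Id"]);
        var name = TestContext.DataRow["Name"].ToString();

        // ...exercise your code with id/name and assert on the result
    }
}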
There are at least two solutions.
Deserialize JSON to domain objects
Using the Json.NET library, you can deserialize a text file containing JSON into a hierarchy of objects:
var hierarchy = JsonConvert.DeserializeObject<Root>(testJson);
See a complete example here.
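For instance, a small helper that reads a JSON fixture stored in the test project might look like this (a minimal sketch; the Customer type and file path are illustrative):

using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

// Illustrative domain type; replace with your real model classes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class TestData
{
    public static IList<Customer> LoadCustomers()
    {
        // the JSON file lives in the test project and is copied to the output directory
        var json = File.ReadAllText(@"TestData\customers.json");
        return JsonConvert.DeserializeObject<List<Customer>>(json);
    }
}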
Benefits
Easy to edit
Complex hierarchies are supported
Test data could be stored in separate text files
Drawbacks
It can be hard to keep the JSON in sync as the domain objects change over time
No schema support
NDbUnit is a .NET library for managing database state during testing
If your tests involve an ORM, you can use NDbUnit to load data into SQLite from an XML file created according to the database's XSD:
SQLiteConnection connection = boundSession.GetConnection();

// temporarily disable foreign key checks while the data set is loaded
using (var cmd = new SQLiteCommand("PRAGMA foreign_keys = OFF", connection))
{
    cmd.ExecuteNonQuery();
}

// load the schema and data, then perform a clean insert preserving identity values
var sqlLiteUnitTest = new SqlLiteDbUnitTest(connection);
sqlLiteUnitTest.ReadXmlSchema(xsdFile);
sqlLiteUnitTest.ReadXml(xmlDataFile);
sqlLiteUnitTest.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);

// re-enable foreign key checks
using (var cmd = new SQLiteCommand("PRAGMA foreign_keys = ON", connection))
{
    cmd.ExecuteNonQuery();
}
See a complete example here.
Benefits
Much easier to handle large amounts of data, thanks to the XSD
Suitable for complex integration test suites, and supports different SQL engines
Drawbacks
This is integration testing, not unit testing
XSD usually contains only objects from database
You need to keep XSD up to date, and be able to regenerate XML in case of XSD changes
(This is similar to this question: How to set up separate test and development database in meteor, however it's 2 years old and meteor has changed a lot since then.)
I'm trying to create my own package and I want to run unit tests. I want to ensure that my queries are correct, so I want to actually run queries against a test database instead of just stubbing the functions.
I have two questions:
How can I tell Meteor to run against a test database instead of my real one?
What's the best way to easily populate that test database with data?
Ideally, I'd have a setup step that clears and then populates the test database, so I always know exactly what data is in it.
I'm a Tinytest novice (though I've used other unit test frameworks), so code samples are much appreciated.
Here's an example similar to what we use:
var resetCollection = function(name) {
var Collection = this[name];
if (Collection)
// if the collection is already defined, remove its documents
Collection.remove({});
else
// define a new unmanaged collection
this[name] = new Mongo.Collection(null);
};
reset = function() {
var collections = ['Comments', 'Posts'];
// reset all of the collections
_.each(collections, function(name) { resetCollection(name); });
// insert some documents
var postId = Posts.insert({title: 'example post'});
Comments.insert({postId: postId, message: 'example comment'});
};
Tinytest.add('something', function(test) {
reset();
var post = Posts.findOne();
var comment = Comments.findOne();
return test.equal(comment.postId, post._id);
});
At the start of each test we call reset which cleans up the database and creates the necessary collections.
How can I tell Meteor to run against a test database instead of my real one?
When you test your packages, a separate database will be created for you. There is no need to manually specify your db path.
What's the best way to easily populate that test database with data?
The example above should give you some pointers. I've found the best way to avoid conflicts between packages is to use unmanaged collections in tests (name = null). The resetCollection function should correctly avoid redefining any managed collections which are exported by other packages. Also see this question for more details.
Today I had a discussion with my colleagues about how we should manage fixtures in our Django application. We could not find any solution that would satisfy everyone, so I'm asking this question here.
Suppose we have a fairly big Django project with a dozen applications inside, and each application has a tests.py file with several test classes. Given this, how should I manage test data for all of these applications?
From my perspective, there are two different ways:
Store all data in a separate test_data.json file for each application. This file would contain test data for all models defined in the application's models.py, irrespective of where that data is used (it could be used by tests in a different application).
Store common data that would probably be required by all tests (like auth.users) in test_data.json, and data for each TestCase in a separate test_case.json file.
From my perspective, the second approach seems cleaner, but I would like to know the concrete pros and cons of these approaches, or perhaps another approach entirely.
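For reference, a rough sketch of how the second approach would be wired up in a TestCase (the app, model, and file names are illustrative):

from django.test import TestCase

from shop.models import Order  # hypothetical app/model


class OrderTests(TestCase):
    # shared data (e.g. auth.users) plus data specific to this TestCase
    fixtures = ['test_data.json', 'order_test_case.json']

    def test_orders_are_loaded(self):
        # the fixtures above are loaded into the test database before each test
        self.assertTrue(Order.objects.exists())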
If you're thinking about the cleanest way to define test data for your tests, I'd recommend reading about the django-any application:
django-any is an explicit replacement for old-style, big, and error-prone implicit fixture files.
django-any allows you to specify only the fields important for the test, and fills the rest with random acceptable values.
It makes tests clean and easy to understand, without reading fixture files.
from django_any import any_model, WithTestDataSeed
class TestMyShop(TestCase):
def test_order_updates_user_account(self):
account = any_model(Account, amount=25, user__is_active=True)
order = any_model(Order, user=account.user, amount=10)
order.proceed()
account = Account.objects.get(pk=account.pk)
self.assertEquals(15, account.amount)
The same approach is also available for forms (django_any.any_form).
This solution is helpful for avoiding keeping extra data in your DB while your tests are executing.
I'm trying to write unit tests for parts of my Node app. I'm using Mongoose for my ORM.
I've searched a bunch for how to do testing with Mongoose and Node but haven't come up with anything. The solutions/frameworks all seem to be full-stack or make no mention of mocking.
Is there a way I can mock my Mongoose DB so I can return static data in my tests? I'd rather not have to set up a test DB and fill it with data for every unit test.
Has anyone else encountered this?
I too went looking for answers, and ended up here. This is what I did:
I started off using mockery to mock out the module that my models are in, and then created my own mock module with each model hanging off it as a property. These properties wrapped the real models (so that child properties exist for the code under test), and then I overrode the methods I wanted to manipulate for the test, like save. This had the advantage that mockery can undo the mocking.
but...
I don't really care enough about undoing the mocking to write wrapper properties for every model. So now I just require my module and override the functions I want to manipulate. I will probably run tests in separate processes if it becomes an issue.
In the arrange part of my tests:
// mock out database saves
var db = require("../../schema");
db.Model1.prototype.save = function(callback) {
console.log("in the mock");
callback();
};
db.Model2.prototype.save = function(callback) {
console.log("in the mock");
callback("mock staged an error for testing purposes");
};
I solved this by structuring my code a little. I keep all my mongoose-related stuff in separate classes with APIs like "save", "find", and "delete", and no other class accesses the database directly. Then I simply mock those in tests that rely on data.
I did something similar with the actual objects that are returned. For every model I have in mongoose, I have a corresponding class that wraps it and provides access-methods to fields. Those are also easily mocked.
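A minimal sketch of that wrapper idea (the module layout, file names, and User model are illustrative):

// userStore.js - the only place that touches mongoose for User documents
var User = require("../schema").User;

module.exports = {
    findByEmail: function (email, callback) {
        User.findOne({ email: email }, callback);
    },
    save: function (doc, callback) {
        new User(doc).save(callback);
    }
};

// in a test, the wrapper's methods are simply replaced with stubs:
var userStore = require("./userStore");
userStore.findByEmail = function (email, callback) {
    callback(null, { email: email, name: "stubbed user" });
};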
Also worth mentioning:
mockgoose - In-memory DB that mocks Mongoose, for testing purposes.
monckoose - Similar, but takes a different approach (Implements a fake driver). Monckoose seems to be unpublished as of March 2015.
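If mockgoose fits your setup, its usage at the time looked roughly like this (treat the exact API as an assumption and check the current docs, since it has changed between versions):

var mongoose = require("mongoose");
var mockgoose = require("mockgoose");

// wraps mongoose so that subsequent connections are held in memory
mockgoose(mongoose);

mongoose.connect("mongodb://localhost/TestingDB", function (err) {
    // run your tests against the in-memory store here
});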
Here is a description of the scenario, and I would also appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data, along with the created keys, to a listening SocketServer in one of the P2P nodes. This key/data pair is routed to the proper node, which stores the data (with the key as an ID) in an XML database.
A second service accepts a query document that is structured based on the same schema, but with optional values that would be used for searching and matching from the previously stored ones. So the second service would pass this query (with the proper keys) to the P2P part, get back the results and pass them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, as the underlying XML database allows, instead of exact value matches, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two into proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template; say, template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
4) Can I or how can I also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving the services and adding new derived ones), but there was no schema or validation, and the data is passed along as basic strings, which gives weak typing and difficult-to-update manual parsing. The reason I want to move to stronger, schema-bound typing is to be able to introduce changes in the data schema that propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime and persist them for later usage). So I came across SDO, which has both dynamic and static typing. The problem is that there is not much community activity and there are few examples of this approach, so it seems risky: examples of the Apache Tuscany and EclipseLink implementations are very scarce, I could not find complete examples that are not 5+ years old (like this http://www.ibm.com/developerworks/java/library/j-sdo/), and most seem to focus on the relational usage of SDO rather than the XML use case.
This is my first time asking for programming help (here and elsewhere), so please bear with me. I searched a lot on the net but could not find anything useful, only pieces here and there that did not add up.
Any comment or hint is really appreciated.
trfndr
EDIT
I forgot one thing: how would the search service get the results back? Since it opens a client socket connection, there is no way to get any results back synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and put this contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
This depends on what web service framework you are using. JAXB is much easier to use with JAX-WS, and while JAXB is still easier to use with JAX-RS, SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template; say, template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using #XmlAnyElement to Build a Generic Message
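A minimal sketch of that idea (the class and element names are illustrative):

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlRootElement;
import org.w3c.dom.Element;

// Generic envelope: the payload is kept as a raw DOM Element, so one
// class/schema can carry bodies based on different templates.
@XmlRootElement(name = "message")
@XmlAccessorType(XmlAccessType.FIELD)
public class GenericMessage {

    @XmlAnyElement
    private Element body;

    public Element getBody() { return body; }
    public void setBody(Element body) { this.body = body; }
}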
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates) to edit the appropriate XML message structures? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I or how can I also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, in EclipseLink we have implemented a dynamic JAXB feature. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
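A rough sketch of the dynamic MOXy API (the schema file name and type names are illustrative, so check the examples above for the details):

import java.io.FileInputStream;
import java.io.InputStream;
import org.eclipse.persistence.dynamic.DynamicEntity;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContext;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContextFactory;

public class DynamicDemo {
    public static void main(String[] args) throws Exception {
        // build a JAXBContext from an XML schema at runtime, with no generated classes
        InputStream xsd = new FileInputStream("template1.xsd");
        DynamicJAXBContext context =
            DynamicJAXBContextFactory.createContextFromXSD(xsd, null, null, null);

        // instantiate and populate a dynamically defined type
        DynamicEntity record = context.newDynamicEntity("org.example.Record");
        record.set("attr1", 42);

        context.createMarshaller().marshal(record, System.out);
    }
}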
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. In the following link, see the section "Switching Off Data Binding":
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
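A minimal sketch of that "no binding" style of endpoint (the class name is illustrative):

import javax.xml.transform.Source;
import javax.xml.ws.Provider;
import javax.xml.ws.Service;
import javax.xml.ws.ServiceMode;
import javax.xml.ws.WebServiceProvider;

// The endpoint receives the raw payload as a Source instead of bound objects,
// which can then be handed to SDO (or other dynamic processing).
@WebServiceProvider
@ServiceMode(Service.Mode.PAYLOAD)
public class StoreServiceProvider implements Provider<Source> {

    @Override
    public Source invoke(Source request) {
        // process the request payload here and build a response Source
        return request; // echo, for illustration only
    }
}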
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates. SDO would be a good choice here. You can constantly add new types to the HelperContext using XML schemas.
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.
We currently have an application that retrieves data from the server through a web service and populates a DataSet. Then the users of the API manipulate it through the objects which in turn change the dataset. The changes are then serialized, compressed and sent back to the server to get updated.
However, I have begun using NHibernate within projects and I really like the disconnected nature of the POCO objects. The problem we have now is that our objects are so tied to the internal DataSet that they cannot be used in many situations, and we end up making duplicate POCO objects to pass back and forth.
Batch.GetBatch() -> calls the web server and populates an internal dataset
Batch.SaveBatch() -> sends changes from the dataset to the web server
Is there a way to achieve a model similar to the one we are using, in which all database access occurs through a web service, but using NHibernate?
Edit 1
I have a partial solution that is working and persisting through a web service, but it has two problems:
I have to serialize and send my whole collection, not just the changed items
If I try to repopulate the collection with the returned objects, then any references I had are lost.
Here is my example solution.
Client Side
public IList<Job> GetAll()
{
return coreWebService
.GetJobs()
.BinaryDeserialize<IList<Job>>();
}
public IList<Job> Save(IList<Job> Jobs)
{
return coreWebService
.Save(Jobs.BinarySerialize())
.BinaryDeserialize<IList<Job>>();
}
Server Side
[WebMethod]
public byte[] GetJobs()
{
using (ISession session = NHibernateHelper.OpenSession())
{
return (from j in session.Linq<Job>()
select j).ToList().BinarySerialize();
}
}
[WebMethod]
public byte[] Save(byte[] JobBytes)
{
var Jobs = JobBytes.BinaryDeserialize<IList<Job>>();
using (ISession session = NHibernateHelper.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
foreach (var job in Jobs)
{
session.SaveOrUpdate(job);
}
transaction.Commit();
}
return Jobs.BinarySerialize();
}
As you can see I am sending the whole collection to the server each time and then returning the whole collection. But I'm getting a replaced collection instead of a merged/updated collection. Not to mention the fact that it seems highly inefficient to send all the data back and forth when only part of it could be changed.
Edit 2
I have seen several references on the web for almost a transparent persistent mechanism. I'm not exactly sure if these will work and most of them look highly experimental.
ADO.NET Data Services w/NHibernate (Ayende)
ADO.NET Data Services w/NHibernate (Wildermuth)
Custom Lazy-loadable Business Collections with NHibernate
NHibernate and WCF is Not a Perfect Match
Spring.NET, NHibernate, WCF Services and Lazy Initialization
How to use NHibernate Lazy Initializing Proxies with Web Services or WCF
I'm having a hard time finding a replacement for the DataSet model we are using today. The reason I want to get away from that model is because it takes a lot of work to tie every property of every class to a row/cell of a dataset. Then it also tightly couples all of my classes together.
I've only taken a cursory look at your question, so forgive me if my response is shortsighted but here goes:
I don't think you can logically get away from doing a mapping from domain object to DTO.
By using the domain objects over the wire you are tightly coupling your client and service, part of the reason to have a service in the first place is to promote loose coupling. So that's an immediate issue.
On top of that you're going to end up with a brittle domain logic interface where you can't make changes on the service side without breaking your client.
I suspect your best bet would be to implement a loosely coupled service that exposes REST or some other loosely coupled interface. You could use a product such as AutoMapper to make the conversions simpler and easier, and also to flatten data structures as necessary.
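As a hedged sketch of the AutoMapper suggestion (Job and JobDto are illustrative types, and the exact API varies between AutoMapper versions):

using AutoMapper;

// illustrative entity and DTO types
public class Job { public int Id { get; set; } public string Name { get; set; } }
public class JobDto { public int Id { get; set; } public string Name { get; set; } }

public static class Mapping
{
    // configure the entity <-> DTO maps once
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Job, JobDto>();
        cfg.CreateMap<JobDto, Job>();
    }).CreateMapper();

    public static JobDto ToDto(Job job)
    {
        return Mapper.Map<JobDto>(job);      // entity -> DTO for the wire
    }

    public static Job ToEntity(JobDto dto)
    {
        return Mapper.Map<Job>(dto);         // DTO -> entity before SaveOrUpdate
    }
}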
At this point I don't know of any way to really cut down on the verbosity involved in doing the interface layers, but having worked on large projects that didn't make the effort, I can honestly tell you the savings weren't worth it.
I think your issue revolves around the question discussed here:
http://thatextramile.be/blog/2010/05/why-you-shouldnt-expose-your-entities-through-your-services/
Are you or are you not going to send ORM-Entities over the wire?
Since you have a service-oriented architecture, I (like the author) do not recommend this practice.
I use NHibernate. I call those ORM-Entities. They are THE POCO model. But they have "virtual" properties that allow for lazy-loading.
However, I also have some DTO objects. These are also POCOs. They do not have lazy-loading-friendly properties.
So I do a lot of "converting". I hydrate ORM-Entities (with NHibernate) and then convert them to Domain-DTO-Objects. Yes, it stinks in the beginning.
The server sends out the Domain-DTO-Objects. There is NO lazy loading. I have to populate them with the "Goldilocks" ("just right") model. That is, if I need Parent(s) with one level of children, I have to know that up front and send the Domain-DTO-Objects over that way, with just the right amount of hydration.
When I send Domain-DTO-Objects back (from client to server), I have to reverse the process. I convert the Domain-DTO-Objects into ORM-Entities and allow NHibernate to work with the ORM-Entities.
Because the architecture is "disconnected", I do a lot of (NHibernate) .Merge() calls.
// ormItem is any NHibernate poco
using (ISession session = ISessionCreator.OpenSession())
{
using (ITransaction transaction = session.BeginTransaction())
{
ParkingAreaNHEntity mergedItem = session.Merge(ormItem);
transaction.Commit();
}
}
.Merge is a wonderful thing. Entity Framework does not have it. Boo.
Is this alot of setup? Yes.
Do I think it is perfect? No.
However, because I send very basic DTOs (POCOs) that are not "flavored" to the ORM, I have the ability to switch ORMs without killing my contracts to the outside world.
My data layer can be ADO.NET, EF, NHibernate, or anything. I have to write the "converters" if I switch, and the ORM code, but everything else is isolated.
Many people argue with me. They say I'm doing too much and that the ORM-Entities are fine.
Again, I like to not allow any lazy loading, and I prefer to have my data layer isolated. My clients should not know or care about my data layer/ORM of choice.
There are just enough subtle differences (or some not so subtle ones) between EF and NHibernate to screwball the game plan.
Do my Domain-DTO-Objects look 95% like my ORM-Entities? Yep. But it's the 5% that can screwball you.
Moving from DataSets, especially if they are populated from stored procedures with a lot of business logic in the T-SQL, isn't trivial. But now that I use an object model, and I NEVER write a stored procedure that isn't a simple CRUD function, I'd never go back.
And I hate maintenance projects with voodoo TSQL in the stored procedures. It ain't 1999 anymore. Well, most places.
Good luck.
PS: Without .Merge (in EF), here is what you have to do in a disconnected world (boo, Microsoft):
http://www.entityframeworktutorial.net/EntityFramework4.3/update-many-to-many-entity-using-dbcontext.aspx