I'm working on a project that I'm really not sure how to unit test. It's an unobtrusive, tag-based framework for wiring up events between models, views and delegates in a GUI system.
Basically you have one large JSON file which describes all of the events, event handlers and bindings. The user creates their models, views and delegates, all of which have no knowledge of the framework. The JSON file is passed to an init() method, then the framework creates all of the instances needed and takes care of all the bindings, listeners etc.
The problems I have are twofold:
1) There is basically only a single public method in the framework, everything else is communicated through the mark-up in the JSON file. Therefore I have a very small testing surface for what is a large and complicated application.
2) One of the big roles of the application is to instantiate classes if they haven't previously been instantiated and cached. This means that I need real classes in my test code; simple mocks aren't going to cut it.
At the moment I'm considering a couple of solutions. The first is to start testing the private methods. The second is to just stub the constructors.
Anyone else have any ideas?
1) There is basically only a single public method in the framework, everything else is communicated through the mark-up in the JSON file. Therefore I have a very small testing surface for what is a large and complicated application.
How is that possible? Is this entire complicated framework stored in one class? If there are several classes involved, how do they share information without public methods?
A constructor is a public method too, by the way.
Are you just passing around the JSON object? That would couple your framework to the information source too tightly. You should have one class parsing the JSON, and the rest communicating without knowledge of the data source (via testable public methods).
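For example, the split could look roughly like this (a minimal sketch in Java; ConfigParser, BindingSpec and Binder are hypothetical names, since I don't know your actual classes):

    import java.util.List;

    // Hypothetical: the only class that knows the configuration is JSON.
    interface ConfigParser {
        List<BindingSpec> parse(String json);
    }

    // Plain value object describing one binding; trivially testable.
    class BindingSpec {
        final String eventName;
        final String handlerClass;
        BindingSpec(String eventName, String handlerClass) {
            this.eventName = eventName;
            this.handlerClass = handlerClass;
        }
    }

    // Wires models, views and delegates from specs. It can be unit
    // tested with hand-built BindingSpec instances - no JSON involved.
    interface Binder {
        void bind(List<BindingSpec> specs);
    }

Now the parser and the binder each have a public, testable seam, and only the parser is coupled to the data format.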
List the features (scenarios, use cases, whatever you want to call them) of the system, and establish JSON data for each feature. These are your unit tests.
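For instance, a feature-level test might look like this (a sketch only - Framework, fireEvent and OrderDelegate.saveInvocations are placeholders for whatever your system's real entry points and observable outputs are):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ClickBindingFeatureTest {

        // Hypothetical JSON describing one feature: a button click
        // routed to a delegate method.
        private static final String CLICK_BINDING =
            "{ \"bindings\": [ { \"event\": \"click\","
            + " \"source\": \"saveButton\","
            + " \"delegate\": \"OrderDelegate.save\" } ] }";

        @Test
        public void clickEventReachesDelegate() {
            Framework framework = new Framework(); // the single entry point
            framework.init(CLICK_BINDING);

            // Drive the system through its real inputs...
            framework.fireEvent("saveButton", "click");

            // ...and assert on observable outputs. OrderDelegate is a
            // real class that records invocations for the test.
            assertEquals(1, OrderDelegate.saveInvocations());
        }
    }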
I implemented an API controller that will be used by part of another system, rather than by users directly. However, I want to provide a unit test for it. I started looking at Moq and then I realized my particular case is a little more complex. The code works, but as I said, I'm trying to write a test for it, ideally without writing any data to the DB.
The structure of the classes looks like this:
api controller
|__ MyCustomClass (injected via startup along with configuration)
|__ UtilityClass (method: ImportSomeDataFromaFolder)
|__ MydataRepositoryClass
|__ CustomDerivedDbContext (overrides SaveChanges etc. so as to capture EF errors)
Note:
- The return value of the API method is a complex JSON object.
- I'd like to have a test that avoids actually writing to the DB.
- I am creating a custom DbContext (CustomDerivedContext) and overriding SaveChanges, so as to capture the EF entities that change in a list, e.g. List<EntityEntry>.
- The method ImportSomeDataFromaFolder, after parsing the data into POCO objects and sending them to the repository for persisting to the DB, moves the file to a different folder. When testing, I'd rather this didn't happen - I'd rather just load a file to parse.
There are 3 primary things to test:
(1) Does the data in the file get loaded into POCO objects?
(2) Do the POCO objects get translated correctly into EF model entities?
(3) Does the API return a JSON object that contains the expected results?
Or am I making things more complicated than they should be for a unit test? I want to write a test against the API controller, but it's the CustomDerivedDbContext that I seem to want to fake here, since I could then remove the step that actually calls the underlying DbContext's SaveChanges.
Sounds like you have tight coupling to implementation concerns that make unit testing your subject in isolation difficult.
That should be seen as a code smell and an indication that the current design choices need to be reviewed and refactored where possible.
If unit testing the API controller, then ideally all you should need is a mock of the explicitly injected abstractions.
The API controller need not know anything about the dependencies of its dependencies if proper separation of concerns is followed.
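Your code is C#/Moq, but the shape of such a test is language-agnostic. A minimal sketch of the idea in Java with Mockito (ImportController, Importer and ImportResult are hypothetical stand-ins for your controller and its injected abstraction):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class ImportControllerTest {

        @Test
        public void returnsResultFromInjectedImporter() {
            // The controller only sees this abstraction. The repository,
            // DbContext etc. are dependencies of the importer, not of the
            // controller, so they never appear in this test.
            Importer importer = mock(Importer.class);
            when(importer.importFromFolder("inbox"))
                .thenReturn(new ImportResult(3, 0));

            ImportController controller = new ImportController(importer);
            ImportResult result = controller.runImport("inbox");

            assertEquals(3, result.imported());
        }
    }

Whether files load into POCOs, and whether POCOs map to EF entities, then become separate tests against the utility and repository classes, each in isolation.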
In our SmartGWT web application, we pass our domain objects from the server to the client and back (via GWT serialization). To show/edit the data at client side in a DynamicForm or a GridList, we have to convert it into a Record (or ListGridRecord) and after editing back into our domain objects.
I would like to write a unit test for this conversion method, but a straightforward attempt in JUnit fails: the record's getAttribute and setAttribute methods are implemented by JSOHelper.getAttribute/JSOHelper.setAttribute, which are static methods declared as native and implemented via JSNI in JavaScript, and thus only usable on the client side, when compiled to JavaScript.
We get an UnsatisfiedLinkError when using these methods from JUnit, as the native methods are not implemented there.
Any ideas how I could test these classes?
These critical methods could easily be implemented with a simple HashMap (or maybe a LinkedHashMap, if the attribute order is important) - a JavaScript object is essentially just that, if you only look at the data part, not the methods. Thus I'm thinking about providing an alternative implementation of some selected SmartGWT classes (mainly JSOHelper), with Java implementations instead of the JavaScript ones.
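A minimal sketch of that idea (FakeRecord is a hypothetical test-only class; it mimics only the attribute-storage part of a Record, not its behaviour):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Test-only stand-in for a SmartGWT Record: attributes backed by a
    // LinkedHashMap instead of a native JavaScript object, so it runs
    // in a plain JVM under JUnit.
    public class FakeRecord {

        private final Map<String, Object> attributes = new LinkedHashMap<>();

        public void setAttribute(String name, Object value) {
            attributes.put(name, value);
        }

        public String getAttribute(String name) {
            Object value = attributes.get(name);
            return value == null ? null : value.toString();
        }

        public Object getAttributeAsObject(String name) {
            return attributes.get(name);
        }
    }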
But am I really the first one who has this problem? Am I simply too stupid to find the existing solution?
If you have used an MVP or MVC pattern in your code, just mock the view code with something like Mockito and test all the rest of the application. To test the view code you will need to use something like Selenium. I don't think GWTTestCase would work with SmartGWT, since it is just a GWT wrapper around JS code.
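As a rough sketch of the MVP approach with Mockito (LoginView and LoginPresenter are made-up examples, not SmartGWT types):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import org.junit.Test;

    public class LoginPresenterTest {

        // Hypothetical MVP view interface: the SmartGWT widgets live
        // behind it, so the presenter never touches JSNI code.
        interface LoginView {
            void showError(String message);
        }

        @Test
        public void emptyPasswordShowsError() {
            LoginView view = mock(LoginView.class);
            LoginPresenter presenter = new LoginPresenter(view);

            presenter.login("alice", "");

            verify(view).showError("Password is required");
        }
    }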
In the project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both so that the user doesn't have to know they're separate (or they do know, but they don't care).
Here's an example: there's a list of languages. Both apps - project and legacy - need to use them. However, they both add their own meaning. For example, the project may need active/inactive, and the legacy app will need a language code.
But the admin part has to manage everything - language name, active/inactive, language code. When loading, data from both systems has to be merged and presented, and when saving, data has to be updated in both systems.
So, what's the best way to represent this separated data (to be used in the admin page)? Note that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database - which is possible, since I do control it?
Where do I do split/merge of data - in the controller/service layer, or in the repository/data layer?
In the controller layer, I'd do "var viewmodel = new ViewModel { MyData = ..., LegacyData = ... }". The problem: code cluttered with legacy concerns.
In the data layer, I'd do "var model = repository.Get(id)" and the model would contain data from both worlds; when I do "repository.Save(entity)" it would update both data sources - only project-specific fields would be stored in the local DB (see the sketch after this list). The problems: a) a possibly leaky abstraction; b) getting data from the web service every time, while it is only needed sometimes, and usually only for the admin part.
A modification: use an ICombinedRepository<Language> which will provide the additional split/merge. Problems: still need either a new model or IWithLegacy<Language, LegacyLanguage>...
Have a single "sync" method; this will remove legacy items not present in the project item list, update those that are present, create legacy items that are missing, etc...
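To make the data-layer option concrete, here is roughly what I have in mind (sketched in Java rather than C# for brevity; all type names are made up):

    // Hypothetical combined repository: merges project and legacy data
    // on load, writes back to both sources on save.
    public class CombinedLanguageRepository {

        private final ProjectLanguageDao projectDao;        // local DB
        private final LegacyLanguageService legacyService;  // legacy web service

        public CombinedLanguageRepository(ProjectLanguageDao projectDao,
                                          LegacyLanguageService legacyService) {
            this.projectDao = projectDao;
            this.legacyService = legacyService;
        }

        public Language get(long id) {
            Language language = projectDao.get(id);         // name, active flag
            LegacyLanguage legacy = legacyService.get(id);  // language code
            language.setCode(legacy.getCode());             // merge
            return language;
        }

        public void save(Language language) {
            projectDao.save(language);  // only project-specific fields
            legacyService.update(language.getId(), language.getCode());
        }
    }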
Well, to summarize the main issues:
Do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move that web service part into the main app or make it use the main DB)?
Do I have separate classes for the project's and legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently when saved/loaded?
Anyway, are there any useful tips on managing mostly duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of the legacy data, which is what I do now behind the repository interfaces. But for the admin part it's not that clear or easy...
What you are describing here seems to warrant the need for an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts, but you're only using DDD for one of them, the Anti-Corruption layer comes into play. When reading from your data source (performing a get operation [R]), the anti-corruption layer will translate your legacy data into usable objects for your project. When writing to your data source (performing a set operation [CUD]), the anti-corruption layer will translate your DDD objects into objects understood by your legacy code.
Whether to keep the existing web service depends on whether you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the web service, you can add the CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
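A bare-bones sketch of such a layer (all names here are hypothetical; LegacyLanguageService stands for whatever wraps your existing GetXXX() interface):

    // The anti-corruption layer is the only place that knows both the
    // legacy web service's types and the project's domain model.
    public class LanguageAntiCorruptionLayer {

        private final LegacyLanguageService legacyService;

        public LanguageAntiCorruptionLayer(LegacyLanguageService legacyService) {
            this.legacyService = legacyService;
        }

        // [R] translate legacy data into a domain object on the way in.
        public Language toDomain(long id) {
            LegacyLanguageDto dto = legacyService.getLanguage(id);
            return new Language(dto.getName(), dto.getIsoCode());
        }

        // [CUD] translate a domain object back into legacy terms on the way out.
        public void fromDomain(Language language) {
            LegacyLanguageDto dto = new LegacyLanguageDto();
            dto.setName(language.getName());
            dto.setIsoCode(language.getCode());
            legacyService.updateLanguage(dto); // a CUD method added in this layer
        }
    }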
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!
I am curious what strategies folks have found for unit testing a data access class that do not involve loading (and presumably unloading) a real database for each test method. Are you using mock objects to represent the database connection? If so, are you required to pass the mock object into every method under test, thus forcing the API to require a real DB connection as a parameter to every method? Or are you passing a mock object into the constructor in setup()?
I have a class that is implementing what I believe is a Data Mapper (or maybe gateway) pattern. It is the class responsible for encapsulating SQL and returning (or saving) "business objects". The rest of the code can interact with this mapper layer and the business objects, with total disregard for the persistence model. This code needs to have/maintain, or just know about, a live db connection in the real system. Emulating this under test is tricky.
The problem is how to unit test one of these mapper classes. The practice I have seen most often for creating a unit test under xUnit is using the test's setup() method to instantiate the SUT (system under test) - usually the object you're testing - and store it in a local variable in the test class. Then each of your test methods interacts with a unique instance of that SUT.
The assumption, though, is that whatever you're doing in the setup() method will presumably be replicated somewhere in your real code. So you have to think about the setup process as: "is this something I will want to repeatedly reproduce every time I need to use this object in the real world?" If I am passing a db connection into the mapper's constructor in the setup, that's fine, but doesn't that mean I'll have to pass a live db connection into the mapper's constructor every time I really want to use one? Imagine all the places where you need to retrieve or store a business object - do you really want to pass in the db connection every time you use a data mapper object?
In my case, I am trying to establish tests for these data mapper objects that achieve the following:
Do not require the database connection object to be instantiated and passed into every method of the mapper class.
Do not require that the test case either connect to a real db or create a real, but "test", db on the fly for each test method.
I have basically seen two suggestions: pass the connection object as a parameter (which I have already addressed), or extend the SUT class just for the test and override whatever db connection setup process you have in the real world to use a mock system instead.
I am curious if anyone else is facing these issues, with any language, and what you have done to solve them? Maybe there is something obvious that I am missing?
In my experience, the responsibility for connecting to a database is a sore point in data access. I solved this by letting the DAO take care of it based on the configuration file (app.config, etc.). This way I don't need to worry about it when I write my tests. The DAL keeps one or more database connection profiles and connects/disconnects on every data access, because in the end the connection pool will take care of physically connecting/disconnecting.
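A rough sketch of what I mean, in Java (the class name and the properties file are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Properties;

    public class CustomerDao {

        private final Properties config;

        public CustomerDao(Properties config) {
            this.config = config; // e.g. loaded from db.properties / app.config
        }

        // The DAO resolves its own connection from configuration; callers
        // (and tests) never pass one in. The pool behind the driver takes
        // care of the physical connect/disconnect.
        private Connection open() throws SQLException {
            return DriverManager.getConnection(
                config.getProperty("db.url"),
                config.getProperty("db.user"),
                config.getProperty("db.password"));
        }

        public String findName(long id) throws SQLException {
            try (Connection c = open();
                 PreparedStatement stmt =
                     c.prepareStatement("SELECT name FROM customer WHERE id = ?")) {
                stmt.setLong(1, id);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }

Pointing db.url at a test database in the test configuration is then all the tests need.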
Another thing that helped me was using DbUnit to load baseline data before running the tests. I found it easier to go straight to the database instead of using mock objects. Also, by connecting to a real database I can (to a certain point) test concurrency by issuing commands in different threads - mock objects wouldn't give me the real behavior.
You can use DbUnit to test SQL
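For example, loading baseline data before each test looks roughly like this (assuming an H2 test database and a baseline.xml flat data set on the test classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;
    import org.junit.Before;

    public class CustomerMapperDbTest {

        @Before
        public void loadBaselineData() throws Exception {
            // Plain JDBC connection to the test database.
            Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:test");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);

            // baseline.xml is a DbUnit flat XML data set, one element per row.
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/baseline.xml"));

            // Wipe the touched tables and insert the known baseline rows.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }
    }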
It depends on what you're really trying to test. If you want to test that your SQL does what you expect, that's really heading into Integration Test territory. Assuming you're using Java, there are several pure-java RDBMS solutions (Apache Derby, HSQLDB, H2) you can use for that.
If on the other hand you're really just testing your Java <-> JDBC code (i.e. reading from ResultSets), then you can mock out pretty much all the relevant parts of JDBC since they're mostly interfaces. JMock is great for this. Simply add a setConnection() method to your Class Under Test, and pass in the mocked java.sql.Connection that will do your bidding. This works really well for keeping tests short and sweet.
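A trimmed-down example of that approach with JMock 2 (UserMapper, setConnection() and findName() stand for your own mapper code; the SQL is illustrative):

    import static org.junit.Assert.assertEquals;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class UserMapperTest {

        private final Mockery context = new Mockery();

        @Test
        public void readsNameFromResultSet() throws Exception {
            final Connection connection = context.mock(Connection.class);
            final PreparedStatement statement = context.mock(PreparedStatement.class);
            final ResultSet resultSet = context.mock(ResultSet.class);

            // JDBC is almost all interfaces, so every step can be scripted.
            context.checking(new Expectations() {{
                oneOf(connection).prepareStatement("SELECT name FROM users WHERE id = ?");
                    will(returnValue(statement));
                oneOf(statement).setLong(1, 42L);
                oneOf(statement).executeQuery(); will(returnValue(resultSet));
                oneOf(resultSet).next(); will(returnValue(true));
                oneOf(resultSet).getString("name"); will(returnValue("Alice"));
                allowing(resultSet).close();
                allowing(statement).close();
            }});

            UserMapper mapper = new UserMapper(); // your class under test
            mapper.setConnection(connection);     // the seam described above

            assertEquals("Alice", mapper.findName(42L));
            context.assertIsSatisfied();
        }
    }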
Depending on how complex your database setup is, using an in-memory store might be a great option.
Normally I do my unit testing with an in-memory SQLite session. This is a full-blown database, 100% in memory - no files, no config needed. Just one line.
Now, this is not always an option: SQLite does not support all the SQL features of full-blown server databases. Normally I use a layer that tries to make my code database-independent. In those cases I just switch to an in-memory database instance which I quickly create/destroy during every setUp/tearDown.
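In JDBC terms it really is about one line (using the sqlite-jdbc driver here; H2 or HSQLDB work the same way with a different URL):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InMemoryDbExample {
        public static void main(String[] args) throws Exception {
            // A throwaway database living entirely in memory.
            // Swap the URL for "jdbc:h2:mem:test" to use H2 instead.
            Connection db = DriverManager.getConnection("jdbc:sqlite::memory:");

            try (Statement s = db.createStatement()) {
                s.execute("CREATE TABLE person (id INTEGER, name TEXT)");
                s.execute("INSERT INTO person VALUES (1, 'Alice')");
            }

            db.close(); // gone without a trace - no files to clean up
        }
    }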
Are you using any mid-layer to access your database? In most cases the greatest benefit of using that type of middleware is not database portability, but a simplified test harness.
I am writing a repository. Fetching objects is done through a DAO. Creating and updating objects is done through a Request object, which is given to a RequestHandler object (a la Command pattern). I didn't write the DAO, Request, or RequestHandler, so I can't modify them.
I'm trying to write a test for this repository. I have mocked out both the DAO and RequestHandler. My goal is to have the mocked RequestHandler simply add the new or updated object to the mocked DAO. This will create the illusion that I'm talking to the DB. This way, I don't have to mock the repository for all the classes that call this repository.
The problem is that the Request object is this gob of string blobs and various alphanumeric codes. I'm pretty sure XML is involved too. It's sort of a mess. Another developer is writing the code to create the Request object based on the objects being stored. And since RequestHandler takes in Requests and not the object I'm storing, it can't update the mocked DAO.
So the question is: do I mock the Request too, or should I wait for the other guy, who is kind of slow, to finish his code before I write the test? Or screw it and mock out the entire repository when testing the classes that call it?
BTW, I say "mock" not in the NMock sense, but rather like faking the DB with an in-memory collection.
To test the repository I would suggest that you use test doubles for all of the lower layer objects.
To test the classes that depend on the repository I would suggest that you use test doubles for the repository.
In both cases I mean test doubles created by some mocking library (fakes where that works for the test, stubs where you need to return something to the object under test and mocks if you really have to).
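Concretely, something like this (a Mockito-flavoured sketch; Dao, RequestHandler and Repository mirror the types described in the question, while Thing and Request are made-up placeholders):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.ArgumentMatchers.any;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class RepositoryTest {

        @Test
        public void fetchesThroughDaoAndSavesThroughHandler() {
            Dao dao = mock(Dao.class);
            RequestHandler handler = mock(RequestHandler.class);
            when(dao.findById(7L)).thenReturn(new Thing(7L, "widget"));

            Repository repository = new Repository(dao, handler);

            // Fetching: stub the DAO and check the repository returns its data.
            assertEquals("widget", repository.get(7L).name());

            // Saving: don't inspect the messy Request contents - just verify
            // that *some* Request reached the handler.
            repository.save(new Thing(7L, "widget"));
            verify(handler).handle(any(Request.class));
        }
    }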
Creating an implementation of the DAO that uses in-memory collections to functionally replace the database in a demo or test system is different from unit testing the upper layers. I have done something similar so that I can give prototypes to people and concentrate on business objects, not the physical model. That isn't for unit testing, though.
You may or may not be creating a web application, but you can have a look at the NerdDinner application, which uses a Repository. It is a free PDF that explains how to create an application using ASP.NET MVC and can be found here: Professional ASP.NET MVC 2.0