Unit Testing.... a data provider?

Given problem:
I like unit tests.
I develop connectivity software to external systems that pretty much always use a C++ library.
The output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard.
How can I test this properly?
I can run a unit test that connects. Sadly, it will then process a live data stream. I can run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I don't even come close to hitting all code paths even once per day (error code paths are rarely exercised).
I also cannot really assert every result. Depending on the time of day, we are talking about 20,000 data callbacks per second - and they are not deterministic enough to validate each one for consistency.
Mocking? Well, that would leave me testing an empty shell, because the code handling the events is precisely the code under test - and in many cases we are talking about a complex C-level structure, so it is hard to find mocking frameworks that bridge from C# to C++.
Anyone have any ideas? I am close to giving up on unit tests for this part of the application.

Unit testing is good, but it shouldn't be your only weapon against bugs. Look into the difference between unit tests and integration tests: it sounds to me like the latter is your best choice.
Also, automated tests (unit tests and integration tests) are only useful if your system's behavior isn't going to change. If you're breaking backward compatibility with every release, the automated tests of that functionality won't help you.
You may also want to see a previous discussion on how much unit testing is too much.

Does your external data source implement an interface? Or can you decouple your class under test from the data source by combining an interface with a wrapper around the data source that implements it? If either is true, then you can mock out the data source in your unit tests and provide the data from the mock instance.
public interface IDataSource
{
    List<DataObject> All();
    ...
}

public class DataWrapper : IDataSource
{
    public DataWrapper( RealDataSource source )
    {
        this.Source = source;
    }

    public RealDataSource Source { get; set; }

    public List<DataObject> All()
    {
        return this.Source.All();
    }
}
Now make your class under test depend on the interface and inject an instance; then, in your unit tests, provide a mock instance that implements the interface.
public void DataSourceAllTest()
{
    var dataSource = MockRepository.GenerateMock<IDataSource>();
    dataSource.Expect( s => s.All() ).Return( ... mock data ... );

    var target = new ClassUnderTest( dataSource );
    var actual = target.Foo();

    // assert something about actual

    dataSource.VerifyAllExpectations();
}
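If a dynamic mock feels like overkill, a hand-rolled fake that replays recorded data satisfies the same interface and makes the nondeterminism problem go away, because every run sees identical data. A minimal sketch, assuming the IDataSource and DataObject types from the snippet above; the replay idea itself is my suggestion, not something the answer above spells out:

using System.Collections.Generic;

// Fake data source that replays a canned, deterministic data set.
public class ReplayDataSource : IDataSource
{
    private readonly List<DataObject> _recorded;

    public ReplayDataSource(IEnumerable<DataObject> recorded)
    {
        _recorded = new List<DataObject>(recorded);
    }

    public List<DataObject> All()
    {
        // Every call returns the same data, so every code path the
        // test exercises is reproducible, unlike a live feed.
        return new List<DataObject>(_recorded);
    }
}

Capture a few seconds of live traffic once, serialize it to disk, and feed it back through a fake like this; rare error paths can then be covered by hand-crafting the pathological payloads the live feed almost never delivers.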


How to choose TDD starting point in a real world project?

I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling to use it in a real-world project. My main issue is that I don't know where to start, or which test should be the first one.
Suppose I have to write a client library calling an external system's methods (e.g. notification).
I want this client to work as follows
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up a rough set of classes for this library?
Should I start by testing NotificationClient as below?
public void testClientSendInvalidEventCommand() {
    NotificationClient client = new NotificationClient(...);
    Response code = client.notifyOnEvent(Event.WRONG_EVENT);
    assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, without the baby steps TDD prescribes. I can mock out something in the client, but then I have to know upfront what to mock, so I need some upfront design.
Maybe I should start from the bottom, test the message formatting component first, and then use it in the right client test?
Which way is the right one to go?
Should we always start from the top (and how do we deal with the huge step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
    NotificationClient client = new NotificationClient("abcd1234");
    assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I've written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
    public NotificationClient(String clientId) {
    }

    public String clientId() {
        return "";
    }
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public String clientId() {
    return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
    NotificationClient client = new NotificationClient("bcde2345");
    assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
    private String _clientId;

    public NotificationClient(String clientId) {
        _clientId = clientId;
    }

    public String clientId() {
        return _clientId;
    }
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
Does that help any?
Don't confuse acceptance tests, which hook into each end of your application and form an executable specification, with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes you can unit test separately.
Figuring out how complicated things can get before you have to split them up in order to test and understand them properly is why it is called Test-Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...

Unit Testing EJB 3.1

I am doing some research on unit testing EJB 3.1. My end goal is to produce an easy-to-use solution for unit testing EJB 3.1.
I do not have much experience with big EJB implementations, so I would first like some experienced hands (you) to pool your ideas on what is difficult about unit testing EJBs.
From the initial research I have already done, I can see the advantages of using mocking frameworks for unit testing rather than embedded containers. Both are good, but mocking frameworks stand a little above when it comes to unit testing. Embedded containers are of course very good and have their own advantages, but they may belong to a different phase of testing. I still believe there must be shortfalls, at least in some scenarios, in using such frameworks, which could be improved.
I hope to put together a complete solution for unit testing EJBs, which I will share in this forum once done.
Thanks for your support.
My advice to you would be to not fall into the common trap I see, which is to think you need to choose between mocking and using an embedded EJB container.
You can use both, you should use both, and where you find it difficult to use both you should demand better support and more features from your EJB container.
Certainly, you will find people at OpenEJB really supportive and more than happy to add features to support getting the best of both worlds. Nearly all the really good features have been created around the requests of users trying to do very specific things and finding it hard.
Standard EJBContainer API
package org.superbiz.stateless.basic;

import junit.framework.TestCase;

import javax.ejb.embeddable.EJBContainer;

public class CalculatorTest extends TestCase {

    private CalculatorBean calculator;

    /**
     * Bootstrap the Embedded EJB Container
     *
     * @throws Exception
     */
    protected void setUp() throws Exception {
        EJBContainer ejbContainer = EJBContainer.createEJBContainer();
        Object object = ejbContainer.getContext().lookup("java:global/simple-stateless/CalculatorBean");
        assertTrue(object instanceof CalculatorBean);
        calculator = (CalculatorBean) object;
    }
Full source here
This scans the classpath and loads all beans.
No-scanning, easier mocking approach
A slightly different approach, where you define everything in code. Mocking is obviously easier here, as you can supply mock implementations of beans wherever needed.
@RunWith(ApplicationComposer.class)
public class MoviesTest extends TestCase {

    @EJB
    private Movies movies;

    @Resource
    private UserTransaction userTransaction;

    @PersistenceContext
    private EntityManager entityManager;

    @Module
    public PersistenceUnit persistence() {
        PersistenceUnit unit = new PersistenceUnit("movie-unit");
        unit.setJtaDataSource("movieDatabase");
        unit.setNonJtaDataSource("movieDatabaseUnmanaged");
        unit.getClazz().add(Movie.class.getName());
        unit.setProperty("openjpa.jdbc.SynchronizeMappings", "buildSchema(ForeignKeys=true)");
        return unit;
    }

    @Module
    public EjbJar beans() {
        EjbJar ejbJar = new EjbJar("movie-beans");
        ejbJar.addEnterpriseBean(new StatefulBean(MoviesImpl.class));
        return ejbJar;
    }

    @Configuration
    public Properties config() throws Exception {
        Properties p = new Properties();
        p.put("movieDatabase", "new://Resource?type=DataSource");
        p.put("movieDatabase.JdbcDriver", "org.hsqldb.jdbcDriver");
        p.put("movieDatabase.JdbcUrl", "jdbc:hsqldb:mem:moviedb");
        return p;
    }

    @Test
    public void test() throws Exception {
        userTransaction.begin();
        try {
            entityManager.persist(new Movie("Quentin Tarantino", "Reservoir Dogs", 1992));
            entityManager.persist(new Movie("Joel Coen", "Fargo", 1996));
            entityManager.persist(new Movie("Joel Coen", "The Big Lebowski", 1998));

            List<Movie> list = movies.getMovies();
            assertEquals("List.size()", 3, list.size());

            for (Movie movie : list) {
                movies.deleteMovie(movie);
            }

            assertEquals("Movies.getMovies()", 0, movies.getMovies().size());
        } finally {
            userTransaction.commit();
        }
    }
}
Full source here
The end result
It's tempting to focus on the differences between the different types of testing, but there's certainly something to be said for a pragmatic middle ground. I personally don't see anything wrong with being able to mix "unit" and "integration" styles as fluently as possible.
Certainly, it's an admirable goal. Ideas and feature requests to get us closer are very welcome.
There are actually two different types of testing you might want to consider (not exclusive):
Unit Testing: Your EJBs are POJOs at the end of the day and therefore you can use your preferred unit testing framework (e.g. JUnit) and also a mocking framework like Mockito or EasyMock.
Integration testing: here you want to test the EJBs as if they were in the container (not in isolation) and therefore you have to somehow emulate that container. You can still use your unit testing framework to code your tests (e.g. JUnit), but now you are testing how these EJBs behave in the container and interact with other collaborators (e.g. other EJBs) they might have. For this I would recommend Arquillian
You can use Needle for unit tests of Java EE components.
Needle is a lightweight framework for testing Java EE components outside of the container in isolation. It reduces test setup code by analysing dependencies and automatically injecting mock objects.
http://needle.spree.de

Application Service Layer: Unit Tests, Integration Tests, or Both?

I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);

    service.Execute(command);

    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}

[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
    /* blah blah */
}

[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
    /* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test despite there also being an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests into one that checks that the order in the mock repository was placed on hold.
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.Save(new MyMockOrder(orderId));

    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);

    service.Execute(command);

    Assert.IsTrue(repository.GetOrder(orderId).IsOnHold());
}
There's really no need to check that Load and/or Save is called. Instead, I'd just make sure that the only way MyMockRepository will return the updated order is if Load and Save are called.
This kind of simplification is one of the reasons I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
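For illustration, here is one possible shape for such a hand-rolled mock. The answer only names MyMockRepository, so the bookkeeping details (and the assumption that IOrder exposes an Id) are mine:

using System.Collections.Generic;

public class MyMockRepository : IOrderRepository
{
    private readonly Dictionary<int, IOrder> _store = new Dictionary<int, IOrder>();
    private readonly HashSet<int> _loaded = new HashSet<int>();
    private readonly HashSet<int> _savedAfterLoad = new HashSet<int>();

    public IOrder Load(int orderId)
    {
        _loaded.Add(orderId);
        return _store[orderId];
    }

    public void Save(IOrder order)
    {
        _store[order.Id] = order;

        // A Save that follows a Load is the code under test at work;
        // the seeding Save in the test happens before any Load.
        if (_loaded.Contains(order.Id))
            _savedAfterLoad.Add(order.Id);
    }

    // Only hands the order back once it has made the full
    // Load -> modify -> Save round trip.
    public IOrder GetOrder(int orderId)
    {
        return _savedAfterLoad.Contains(orderId) ? _store[orderId] : null;
    }
}

With this in place, the single assert in the test above fails (GetOrder returns null) unless the service really did load, modify, and save the order.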
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline, I aim for a full integration test of every important successful scenario/use case. The other tests I write are for parts of the code that are likely to break with future changes, or that have broken in the past. And that is definitely not all code. That's where your judgement and insight into the system and requirements come into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure if it adds value to test if the service loads an order from the repository exactly once. But it could be! For instance when your service previously had a nasty bug that would hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your sequence of test methods seems to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation. That way, tests can actually make changes harder - make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all the code being tested. Then I write unit tests for all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both, and yes, it is possible to test it all with integration tests only, but that makes your tests long-running and harder to debug. On average, I have about 10 unit tests per integration test.
I don't bother to test one-liner methods unless something business-logic-like happens in that line.
Update: Just to make it clear, because I do test-driven development, I always write the unit tests first and typically write the integration test at the end.

Can I unit test a method that makes Sitecore context calls?

I'm working on a web application that is built on top of Sitecore CMS. I was wondering whether we could unit test, for example, a method that takes some data from Sitecore, processes it, and spits out a result. I would like to test all the logic within the method via a unit test.
I'm pretty confused after searching the internet wide and deep. Some say that this kind of testing is actually integration testing and not unit testing, and that I should test only the code that has no Sitecore calls; others say that this is not possible because the Sitecore context would be missing.
I would like to ask for your help experienced fellow programmers:
Can I unit test a method that contains Sitecore calls? If YES, how? If NO, why? Is there any workaround?
The project is at its beginning, so there will be no problem in choosing between unit testing frameworks such as MSTest or NUnit, if the solution turns out to depend on the choice of framework.
It's pretty hard to find out anything about Sitecore without providing email and living through the sales pitch, so I'll just provide a generic approach on how to do something like this.
First and foremost, you assume that the Sitecore API is guaranteed to work - i.e. it's a framework - and you don't unit test it. You should be unit testing your interactions with it.
Then, download Moq and read the quick start on how to use it. It is my preferred mocking framework, but feel free to use others if you wish.
Hopefully, the Sitecore API provides a way for you to create data objects without dealing with persistence - i.e. to simply create a new instance of whatever it is you are interested in. Here is my imaginary API:
public class Post {
    public string Body { get; set; }
    public DateTime LastModified { get; set; }
    public string Title { get; set; }
}

public interface ISiteCorePosts {
    IEnumerable<Post> GetPostsByUser(int userId);
}
In this case unit testing should be fairly easy. With a bit of Dependency Injection, you can inject the SiteCore interfaces into your component and then unit test it.
public class MyPostProcessor {
    private readonly ISiteCorePosts m_postRepository;

    public MyPostProcessor(ISiteCorePosts postRepository) {
        m_postRepository = postRepository;
    }

    public void ProcessPosts(int userId) {
        var posts = m_postRepository.GetPostsByUser(userId);
        //do something with posts
    }
}

public class MyPostProcessorTest {
    [TestMethod]
    public void ProcessPostsShouldCallGetPostsByUser() {
        var siteCorePostsMock = new Mock<ISiteCorePosts>();

        //Sets up the mock to return a list of posts when called with userId = 5
        siteCorePostsMock.Setup(m => m.GetPostsByUser(5)).Returns(new List<Post>{ /*fake posts*/ });

        MyPostProcessor target = new MyPostProcessor(siteCorePostsMock.Object);
        target.ProcessPosts(5);

        //Verifies that all setups are called
        siteCorePostsMock.VerifyAll();
    }
}
If ISiteCorePosts is not, in fact, an interface but a concrete class whose methods are not virtual, and thus cannot be mocked, you will need to use the Facade pattern to wrap the SiteCore interaction and make it more testing-friendly.
public class SiteCorePostsFacade {
    SiteCorePosts m_Posts = new SiteCorePosts();

    //important - make this method virtual so it can be mocked without needing an interface
    public virtual IEnumerable<Post> GetPostsByUser(int userId) {
        return m_Posts.GetPostsByUser(userId);
    }
}
You then proceed to use SiteCorePostsFacade as though it were an interface, as in the previous example. A good thing about Moq is that it allows you to mock concrete classes with virtual methods, not just interfaces.
With this approach, you should be able to inject all sorts of data into your application to test all interactions with SiteCore API.
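For completeness, a short sketch of what mocking the facade might look like; it assumes MyPostProcessor has been reworked to take a SiteCorePostsFacade instead of the interface:

[TestMethod]
public void ProcessPostsShouldUseTheFacade() {
    // GetPostsByUser is virtual, so Moq can override it even though
    // SiteCorePostsFacade is a concrete class rather than an interface.
    var facadeMock = new Mock<SiteCorePostsFacade>();
    facadeMock.Setup(f => f.GetPostsByUser(5)).Returns(new List<Post> { /*fake posts*/ });

    // Assumes a MyPostProcessor constructor overload that accepts the facade.
    var target = new MyPostProcessor(facadeMock.Object);
    target.ProcessPosts(5);

    facadeMock.VerifyAll();
}

Note that Moq creates a dynamic subclass of the facade at runtime, which is exactly why the method has to be virtual.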
We have used a custom WebControl placed on a WebForm for our integration tests for some years now; it wraps the NUnit test suite runner functionality much like the NUnit GUI. It shows a pretty grid of executed tests with links to fixtures and categories for executing specific tests. It's created much as described here: http://adeneys.wordpress.com/2010/04/13/new-technique-for-unit-testing-renderings-in-sitecore/ (the custom test runner part). Our implementation can also return raw NUnit XML for further processing, for example by a build server.
I tried MSTest a while back, and it also works when you specify that it should launch a WebDev / IIS site to test against. It works, but it is extremely slow compared to the above solution.
Happy testing!
Short answer:
You need to mock calls to SiteCore CMS.
Long answer:
I am not familiar with Sitecore CMS. But from your question it looks like it is something external to your application. Components external to your system should always be used via an interface. This has two benefits:
If you want to use another CMS system, you can easily do as your application is just talking to an interface.
It helps you with behavior testing by mocking the interface.
The code you write is your responsibility, and hence you should only unit test that piece of code. Your unit tests should ensure that your code calls the appropriate Sitecore CMS methods in various scenarios (behavior tests). You can do this using mocking. I use Moq for mocking.
As tugga said, it depends upon how tightly the code you want to test is coupled to SiteCore. If it's something like:
SomeSiteCoreService siteCoreDependency = new SomeSiteCoreService()
Then this would be very difficult to test. If SiteCore provides you an interface, then you have more flexibility to unit test it. You could pass the implementation in (via a constructor, a class property, or a method parameter) and then send in a fake implementation of that service.
If they do not provide you with an interface, then you have to do a little more work. You would write an adapter interface of your own and the default implementation would delegate to the 3rd party dependency.
public interface ICMSAdapter {
    void DoSomethingWithCMS();
}

public class SiteCoreCMSAdapter : ICMSAdapter {
    SiteCoreService _cms = new SiteCoreService();

    public void DoSomethingWithCMS() {
        _cms.DoSomething();
    }
}

That keeps your 3rd party dependencies at arm's length and provides seams for all sorts of cool things, like unit tests and interception-style architecture, where you can do your own thing before and after the call.
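Because the adapter interface is so small, you don't even need a mocking framework here; a hand-rolled fake (illustrative names, nothing Sitecore-specific) gives you a test seam:

// Records the interaction instead of touching Sitecore.
public class FakeCMSAdapter : ICMSAdapter {
    public bool WasCalled { get; private set; }

    public void DoSomethingWithCMS() {
        WasCalled = true;
    }
}

Inject the fake into the class under test, run the method, and assert on WasCalled (or on whatever data the fake captured).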
I was able to get unit tests to interact with the Sitecore API in VS 2015. The same test throws a StackOverflowException when run in VS 2012.
For example, this method call runs fine in VS 2015 but not in VS 2012:
Context.SetActiveSite("mysite");
Quick note: this assumes you have a site named mysite set up in your config file.

MEF and unit testing with NUnit

A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and also shared libraries. Overall, it's been great aside from the frequent mistakes on my part, which result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy( [Import("name_param")] string name) {}

    ...
}
But in my unit tests, I would just use
ICandyInterface candy = new MyCandy("Godiva");
In addition, MyCandy requires a connection to a database, which I have worked around by adding a test database to my unit test folder and having NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so they are definitely something you should learn.
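To make that concrete, here is a minimal sketch with Moq and NUnit; ICandyStore and CandyDescriber are invented for illustration, standing in for whatever your database-touching dependency really is:

using Moq;
using NUnit.Framework;

// Hypothetical interface standing in for the real database access.
public interface ICandyStore
{
    string GetFlavor(string name);
}

// Hypothetical class under test that depends on the store.
public class CandyDescriber
{
    private readonly ICandyStore _store;

    public CandyDescriber(ICandyStore store)
    {
        _store = store;
    }

    public string Describe(string name)
    {
        return name + " tastes like " + _store.GetFlavor(name);
    }
}

[TestFixture]
public class CandyDescriberTests
{
    [Test]
    public void Describe_UsesStoreData_WithoutARealDatabase()
    {
        var store = new Mock<ICandyStore>();
        store.Setup(s => s.GetFlavor("Godiva")).Returns("dark chocolate");

        var describer = new CandyDescriber(store.Object);

        Assert.AreEqual("Godiva tastes like dark chocolate", describer.Describe("Godiva"));
        store.VerifyAll();  // fails if GetFlavor was never called
    }
}

The test needs no database at all: the mock returns dummy data, and the verification step documents the expected interaction.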
I agree that creating the DOCs (depended-on components) manually is much better than using the MEF composition container to satisfy imports, but regarding the note that composing fixtures in setup leads to the general fixture anti-pattern: I want to mention that that's not always the case.
If you're using the static container and satisfying imports via CompositionInitializer.SatisfyImports, you will have to face the general fixture anti-pattern, as CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/general fixture anti-pattern.
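A minimal sketch of that per-test approach (IWidget and Widget are toy parts invented for the example):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;
using NUnit.Framework;

public interface IWidget { }

[Export(typeof(IWidget))]
public class Widget : IWidget { }

[TestFixture]
public class PerTestContainerExample
{
    [Import]
    public IWidget Widget { get; set; }

    [Test]
    public void SatisfiesImportsFromAFreshContainer()
    {
        // A brand-new container per test sidesteps the shared-fixture
        // problem of the once-only CompositionInitializer.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            container.SatisfyImportsOnce(this);
            Assert.IsNotNull(Widget);
        }
    }
}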
I blogged about how to do unit tests (not NUnit, but it works just the same) with MEF.
The trick was to use a MockExportProvider, and I created a test base for all my tests to inherit from.
This is my main AutoWire function that works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;

    var assCatalogs = new List<AssemblyCatalog>();
    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }

    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); //need to use the mocks provider before the assembly ones

        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }

        container = new CompositionContainer(providers.ToArray());

        foreach (var p in providers) //must set the source provider for CatalogExportProvider back to the container (kinda stupid but apparently no way around this)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }

    container.ComposeParts(this);
}
More info on my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/