How do I use an API of a Windows service? - unit-testing

I've got a big Windows service application. It performs actions on a time-bound basis. Sometimes I need to be able to use some of its functionality in isolation from the rest of the application. Currently I've got a battery of 'unit tests' which call into various sources and perform the desired functionality. My problem is that these are not unit tests; they are the way we're exposing the API. If we run all the unit tests in the project, we'll damage some of our production data.
My question is how do I go about accessing some of the functionality of the application without unit testing? I was thinking of perhaps something like an interpreter over the top of it where you can call various parts of the functionality, but am not really that sure where to start.
An example of a unit test in our code would be:
[TestMethod]
public void TransferFunds()
{
    int accountNumberTo = 123456;
    int accountNumberFrom = 654321;
    var accountFrom = Store.GetAccount(accountNumberFrom);
    var accountTo = Store.GetAccount(accountNumberTo);
    double amountToTransfer = 1000;
    DateTime transactionDate = new DateTime(2010, 01, 01);
    Store.TransferFunds(accountFrom, accountTo, amountToTransfer, transactionDate);
    var client = BankAccountService.Client();
    client.Contribute(accountNumberTo, amountToTransfer, transactionDate);
    client.Contribute(accountNumberFrom, amountToTransfer, transactionDate);
}
How can we move this out of unit tests, but still have the ability to run code like this?

Your setup sounds very dangerous. I would create separate console applications for your different needs. I would also recommend that you remove all unit tests that endanger your production data. Having that sort of unit test is just downright bad!
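For illustration, a minimal sketch of what one such console application could look like, reusing the Store and BankAccountService types from your example (the tool name and argument handling here are made up):
using System;

public static class TransferFundsTool
{
    public static void Main(string[] args)
    {
        if (args.Length < 3)
        {
            Console.WriteLine("Usage: TransferFundsTool <fromAccount> <toAccount> <amount>");
            return;
        }

        int accountNumberFrom = int.Parse(args[0]);
        int accountNumberTo = int.Parse(args[1]);
        double amountToTransfer = double.Parse(args[2]);
        DateTime transactionDate = DateTime.Today;

        var accountFrom = Store.GetAccount(accountNumberFrom);
        var accountTo = Store.GetAccount(accountNumberTo);

        // The same calls the "unit test" made, now run only when someone deliberately invokes the tool.
        Store.TransferFunds(accountFrom, accountTo, amountToTransfer, transactionDate);

        var client = BankAccountService.Client();
        client.Contribute(accountNumberTo, amountToTransfer, transactionDate);
        client.Contribute(accountNumberFrom, amountToTransfer, transactionDate);
    }
}
That way the functionality stays callable on demand, but a test runner can never trigger it by accident.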

Related

Ordered Selenium unit tests

I have a small problem: I have created some Selenium tests, but I can't order the test cases I have created. I know unit tests should not be ordered, but this is what I need in my situation. I have to follow these steps: log in first, create a new customer, change some details about the customer, and finally log out.
Since there is no option to order unit tests in NUnit, I can't execute this.
I have already tried another option: creating a unit test project in Visual Studio, because Visual Studio 2012 has the ability to create an ordered unit test. But this is not working because I can't run a unit test while I am running my ASP.NET project. A separate solution file is also not a good option because I want to verify my data after it has been submitted by a Selenium test.
Does someone have another solution to my problem?
If you want to test all of those steps in a specific order (and by the sounds of it, as a single session), then really it's more like an acceptance test you are talking about; in that case it's not a sin to write more complex test methods and Assert your conditions after each step.
If you want to test each step in true isolation (a pure unit test), then each unit test must be capable of running by itself without any reference to any other tests; but when you're testing the actual site UI itself, this isn't really an option for you.
Of course, if you really want every single test to somehow set up every single dependency without reference to any other actions (e.g. in the last test you would need to fake the login token, your data layer would have to pretend that you added a new customer, etc.), that's a lot of work for dubious benefit.
I say this based on the assumption that you already have unit tests written for the server-side controllers, layers, models, etc., that you run without any reference to the actual site running in a browser, and are therefore confident that the various back-end parts of your site do what they are supposed to do.
In your case I'd recommend more of a hybrid integration/acceptance test:
void Login(IWebDriver driver)
{
    // use driver to open the browser, navigate to the login page,
    // type user/password into the box and press enter
}
void CreateNewCustomer(IWebDriver driver)
{
    Login(driver);
    // and then use driver to click the "Create Customer" link, etc.
}
void EditNewlyCreatedCustomer(IWebDriver driver)
{
    Login(driver);
    CreateNewCustomer(driver);
    // do your Selenium stuff...
}
and then your test methods:
[Test]
public void Login_DoesWhatIExpect()
{
    var driver = new InternetExplorerDriver();
    driver.Navigate().GoToUrl("your Login URL here");
    Login(driver);
    Assert(Something);
}
[Test]
public void CreateNewCustomer_WorksProperly()
{
    var driver = new InternetExplorerDriver();
    driver.Navigate().GoToUrl("your Login URL here");
    CreateNewCustomer(driver);
    Assert(Something);
}
[Test]
public void EditNewlyCreatedCustomer_DoesntExplodeTheServer()
{
    var driver = new InternetExplorerDriver();
    driver.Navigate().GoToUrl("your Login URL here");
    EditNewlyCreatedCustomer(driver);
    Assert(Something);
}
In this way the order of the specific tests does not matter; certainly, if the Login test fails then the CreateNewCustomer and EditNewlyCreatedCustomer tests will also fail, but that's actually irrelevant in this case as you are testing an entire "thread" of operation.

Application Service Layer: Unit Tests, Integration Tests, or Both?

I've got a bunch of methods in my application service layer that are doing things like this:
public void Execute(PlaceOrderOnHoldCommand command)
{
    var order = _repository.Load(command.OrderId);
    order.PlaceOnHold();
    _repository.Save(order);
}
And at present, I have a bunch of unit tests like this:
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    var repository = new Mock<IOrderRepository>();
    const int orderId = 1;
    var order = new Mock<IOrder>();
    repository.Setup(r => r.Load(orderId)).Returns(order.Object);
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository.Object);
    service.Execute(command);
    repository.Verify(r => r.Load(It.Is<int>(x => x == orderId)), Times.Exactly(1));
}
[Test]
public void PlaceOrderOnHold_CallsPlaceOnHold()
{
/* blah blah */
}
[Test]
public void PlaceOrderOnHold_SavesOrderToRepository()
{
/* blah blah */
}
It seems to be debatable whether these unit tests add value that's worth the effort. I'm quite sure that the application service layer should be integration tested, though.
Should the application service layer be tested to this level of granularity, or are integration tests sufficient?
I'd write a unit test even though there is also an integration test. However, I'd likely make the test much simpler by eliminating the mocking framework, writing my own simple mock, and then combining all those tests into one that checks that the order in the mock repository was placed on hold.
[Test]
public void PlaceOrderOnHold_LoadsOrderFromRepository()
{
    const int orderId = 1;
    var repository = new MyMockRepository();
    repository.Save(new MyMockOrder(orderId));
    var command = new PlaceOrderOnHoldCommand(orderId);
    var service = new OrderService(repository);
    service.Execute(command);
    Assert.IsTrue(repository.GetOrder(orderId).IsOnHold());
}
There's really no need to check that Load and/or Save is called. Instead, I'd just make sure that the only way MyMockRepository will return the updated order is if Load and Save are called.
This kind of simplification is one of the reasons that I usually don't use mocking frameworks. It seems to me that you have much better control over your tests, and a much easier time writing them, if you write your own mocks.
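For illustration, a hand-rolled mock along those lines might look roughly like this (a sketch only; it assumes IOrder exposes an Id and that the repository interface has the Load and Save methods used by the service above):
using System.Collections.Generic;

public class MyMockRepository : IOrderRepository
{
    private readonly Dictionary<int, IOrder> _seeded = new Dictionary<int, IOrder>();
    private readonly Dictionary<int, IOrder> _savedAfterLoad = new Dictionary<int, IOrder>();
    private readonly HashSet<int> _loaded = new HashSet<int>();

    public void Save(IOrder order)
    {
        // The first Save (from the test) seeds the repository;
        // a Save after a Load records the update made by the service.
        if (_loaded.Contains(order.Id))
            _savedAfterLoad[order.Id] = order;
        else
            _seeded[order.Id] = order;
    }

    public IOrder Load(int orderId)
    {
        _loaded.Add(orderId);
        return _seeded[orderId];
    }

    // Only returns orders that were loaded and then saved again, so asserting
    // on its result implies the service called both Load and Save.
    public IOrder GetOrder(int orderId)
    {
        return _savedAfterLoad.ContainsKey(orderId) ? _savedAfterLoad[orderId] : null;
    }
}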
Exactly: it's debatable! It's really good that you are weighing the expense/effort of writing and maintaining your test against the value it will bring you - and that's exactly the consideration you should make for every test you write. Often I see tests written for the sake of testing and thereby only adding ballast to the code base.
As a guideline I usually take that I want a full integration test of every important successful scenario/use case. Other tests I'll write are for parts of the code that are likely to break with future changes, or have broken in the past. And that is definitely not all code. That's where your judgement and insight in the system and requirements comes into play.
Assuming that you have an (integration) test for service.Execute(placeOrderOnHoldCommand), I'm not really sure whether it adds value to test that the service loads an order from the repository exactly once. But it could! For instance, when your service previously had a nasty bug that hit the repository ten times for a single order, causing performance issues (just making it up). In that case, I'd rename the test to PlaceOrderOnHold_LoadsOrderFromRepositoryExactlyOnce().
So for each and every test you have to decide for yourself ... hope that helps.
Notes:
The tests you show can be perfectly valid and look well written.
Your test methods seem to be inspired by the way the Execute(...) method is currently implemented. When you structure your tests this way, you may be tying yourself to a specific implementation. Tests like that can actually make the code harder to change - make sure you're only testing the important external behavior of your class.
I usually write a single integration test of the primary scenario. By primary scenario I mean the successful path through all the code being tested. Then I write unit tests for all the other scenarios, like checking all the cases in a switch, testing exceptions, and so forth.
I think it is important to have both. Yes, it is possible to test it all with integration tests only, but that makes your tests long-running and harder to debug. On average I have about 10 unit tests per integration test.
I don't bother to test one-liner methods unless something business-logic-like happens in that line.
Update: Just to make it clear, because I'm doing test-driven development I always write the unit tests first and typically write the integration test at the end.

Unit Testing.... a data provider?

Given problem:
I like unit tests.
I develop connectivity software for external systems, which pretty much always means using a C++ library.
The output of these systems is nondeterministic. Data is received while running, but making sure it is all correctly interpreted is hard.
How can I test this properly?
I can run a unit test that connects. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I simply don't come close to hitting all code paths even once per day (error code paths are rarely run).
I also cannot really assert every result. Depending on the time of day we're talking about 20,000 data callbacks per second - none of which are really well-defined enough to validate each of them for consistency.
Mocking? Well, that would leave me testing an empty shell of myself, because the code handling the events is basically the code under test, and in many cases we're dealing with a complex C-level structure - it's hard to find mocking frameworks that integrate from C# to C++.
Anyone have any ideas? I am close to giving up on unit tests for this part of the application.
Unit testing is good, but it shouldn't be your only weapon against bugs. Look into the difference between unit tests and integration tests: it sounds to me like the latter is your best choice.
Also, automated tests (unit tests and integration tests) are only useful if your system's behavior isn't going to change. If you're breaking backward compatibility with every release, the automated tests of that functionality won't help you.
You may also want to see a previous discussion on how much unit testing is too much.
Does your external data source implement an interface, or can you decouple your class under test from the data source by using a combination of an interface and a wrapper around the data source that implements it? If either of these is true, then you can mock out the data source in your unit tests and provide the data from the mock instance.
public interface IDataSource
{
    List<DataObject> All();
    ...
}
public class DataWrapper : IDataSource
{
    public DataWrapper( RealDataSource source )
    {
        this.Source = source;
    }
    public RealDataSource Source { get; set; }
    public List<DataObject> All()
    {
        return this.Source.All();
    }
}
Now make your class under test depend on the interface and inject an instance; then, in your unit tests, provide a mock instance that implements the interface.
public void DataSourceAllTest()
{
    var dataSource = MockRepository.GenerateMock<IDataSource>();
    dataSource.Expect( s => s.All() ).Return( ... mock data ... );
    var target = new ClassUnderTest( dataSource );
    var actual = target.Foo();
    // assert something about actual
    dataSource.VerifyAllExpectations();
}
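For completeness, a sketch of what the class under test might look like with the dependency injected through its constructor (ClassUnderTest and Foo are just the placeholder names from the test above):
public class ClassUnderTest
{
    private readonly IDataSource _dataSource;

    public ClassUnderTest(IDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public int Foo()
    {
        // Whatever behaviour you actually want to verify; counting records is only a placeholder.
        return _dataSource.All().Count;
    }
}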

Unit testing code which uses an API

I have this simple method which calls the TFS (Team Foundation Server) API to get a WorkItemCollection object. It simply converts it into an entity class and also adds it to the cache. As you can see, this is very simple.
How should I unit test this method? The only important thing it does is call the TFS API. Is it worth testing such methods? If yes, how should we test them?
One way I can think of is to mock the call to Query.QueryWorkItemStore(query), have it return an object of type WorkItemCollection, and then check that this method converts the WorkItemCollection to a List and that the result was added to the cache.
Also, as I am using the dependency injection pattern here, I am injecting dependencies for
cache
Query
Should I only pass dependencies of a mocked type (using Moq), or should I pass the proper class types?
public virtual List<Sprint> Sprint(string query)
{
    List<Sprint> sprints = Cache.Get<List<Sprint>>(query);
    if (sprints == null)
    {
        WorkItemCollection items = Query.QueryWorkItemStore(query);
        sprints = new List<Sprint>();
        foreach (WorkItem i in items)
        {
            Sprint sprint = new Sprint
            {
                ID = i.Id,
                IterationPath = i.IterationPath,
                AreaPath = i.AreaPath,
                Title = i.Title,
                State = i.State,
                Goal = i.Description,
            };
            sprints.Add(sprint);
        }
        Cache.Add(sprints, query, this.CacheExpiryInterval);
    }
    return sprints;
}
Should I only pass dependencies of a mocked type (using Moq), or should I pass the proper class types?
In your unit tests, you should pass a mock. There are several reasons:
A mock is transparent: it allows you to check that the code under test did the right thing with the mock.
A mock gives you full control, allowing you to test scenarios that are difficult or impossible to create with the real server (e.g. throw IOException)
A mock is predictable. A real server is not - it may not even be available when you run your tests.
Things you do on a mock don't influence the outside world. You don't want to change data or crash the server by running your tests.
A test with mocks is faster. No connection to the server or real database queries have to be made.
That being said, automated integration tests which include a real server are also very useful. You just have to keep in mind that they will have lower code coverage, will be more fragile, and will be more expensive to create/run/maintain. Keep your unit tests and your integration tests separate.
edit: some collaborator objects, like your Cache object, may also be very unit-test friendly. If they have the same advantages as a mock (see the list above), then you don't need to create a mock. For example, you typically don't need to mock a collection.
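To make that concrete, a rough Moq-based sketch of a unit test for the cache-hit path could look like this; the ICache and IWorkItemQuery interfaces and the SprintService constructor are assumptions about how the dependencies are injected, not your actual types:
[TestMethod]
public void Sprint_WhenResultIsCached_DoesNotQueryTfs()
{
    // Assumed interfaces and constructor, for illustration only.
    var cachedSprints = new List<Sprint> { new Sprint { ID = 1, Title = "Sprint 1" } };
    var cache = new Mock<ICache>();
    cache.Setup(c => c.Get<List<Sprint>>("my query")).Returns(cachedSprints);
    var query = new Mock<IWorkItemQuery>();

    var service = new SprintService(cache.Object, query.Object);

    var result = service.Sprint("my query");

    Assert.AreSame(cachedSprints, result);
    // The TFS query must never run when the cache already has the sprints.
    query.Verify(q => q.QueryWorkItemStore(It.IsAny<string>()), Times.Never());
}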

MEF and unit testing with NUnit

A few weeks ago I jumped on the MEF (ComponentModel) bandwagon, and am now using it for a lot of my plugins and also shared libraries. Overall, it's been great aside from the frequent mistakes on my part, which result in frustrating debugging sessions.
Anyhow, my app has been running great, but my MEF-related code changes have caused my automated builds to fail. Most of my unit tests were failing simply because the modules I was testing were dependent upon other modules that needed to be loaded by MEF. I worked around these situations by bypassing MEF and directly instantiating those objects.
In other words, via MEF I would have something like
[Import]
public ICandyInterface ci { get; set; }
and
[Export(typeof(ICandyInterface))]
public class MyCandy : ICandyInterface
{
    [ImportingConstructor]
    public MyCandy( [Import("name_param")] string name) {}
    ...
}
But in my unit tests, I would just use
ICandyInterface myCandy = new MyCandy("Godiva");
In addition, the CandyInterface requires a connection to a database, which I have worked around by just adding a test database to my unit test folder, and I have NUnit use that for all of the tests.
Ok, so here are my questions regarding this situation:
Is this a Bad Way to do things?
Would you recommend composing parts in [SetUp]?
I haven't yet learned how to use mocks in unit testing -- is this a good example of a case where I might want to mock the underlying database connection (somehow) to just return dummy data and not really require a database?
If you've encountered something like this before, can you offer your experience and the way you solved your problem? (or should this go into the community wiki?)
It sounds like you are on the right track. A unit test should test a unit, and that's what you do when you directly create instances. If you let MEF compose instances for you, they would tend towards integration tests. Not that there's anything wrong with integration tests, but unit tests tend to be more maintainable because you test each unit in isolation.
You don't need a container to wire up instances in unit tests.
I generally recommend against composing Fixtures in SetUp, as it leads to the General Fixture anti-pattern.
It is best practice to replace dependencies with Test Doubles. Dynamic mocks are one of the more versatile ways of doing this, so that is definitely something you should learn.
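As a small illustration of that last point, a consumer of ICandyInterface can be unit tested against a Moq test double instead of a real MyCandy, so neither MEF nor the database is involved (CandyConsumer, Describe and the Name property are hypothetical stand-ins for your own members):
[Test]
public void Consumer_WorksAgainstACandyTestDouble()
{
    // ICandyInterface is replaced by a dynamic mock; Name is an assumed member.
    var candy = new Mock<ICandyInterface>();
    candy.Setup(c => c.Name).Returns("Godiva");

    // Hypothetical consumer module that [Import]s ICandyInterface.
    var consumer = new CandyConsumer { ci = candy.Object };

    StringAssert.Contains("Godiva", consumer.Describe());
}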
I agree that creating the DOCs (depended-on components) manually is much better than using the MEF composition container to satisfy imports, but regarding the note that 'composing fixtures in SetUp leads to the General Fixture anti-pattern' - I want to mention that that's not always the case.
If you're using the static container and satisfy imports via CompositionInitializer.SatisfyImports, you will have to face the General Fixture anti-pattern, as CompositionInitializer.Initialize cannot be called more than once. However, you can always create a CompositionContainer, add catalogs, and call SatisfyImportsOnce on the container itself. In that case you can use a new CompositionContainer in every test and avoid the shared/General Fixture anti-pattern.
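A sketch of that per-test container approach, reusing MyCandy and the "name_param" import from the question (the test fixture itself declares the [Import]):
[Import]
public ICandyInterface ci { get; set; }

[Test]
public void ComposesWithAFreshContainerPerTest()
{
    // A brand-new container per test, so no fixture state is shared between tests.
    var catalog = new AssemblyCatalog(typeof(MyCandy).Assembly);
    using (var container = new CompositionContainer(catalog))
    {
        container.ComposeExportedValue("name_param", "Godiva"); // satisfies MyCandy's constructor import
        container.SatisfyImportsOnce(this);
        Assert.IsNotNull(ci);
    }
}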
I blogged about how to do unit tests (not NUnit, but it works just the same) with MEF.
The trick was to use a MockExportProvider, and I created a test base class for all my tests to inherit from.
This is my main AutoWire function that works for integration and unit tests:
protected void AutoWire(MockExportProvider mocksProvider, params Assembly[] assemblies)
{
    CompositionContainer container = null;
    var assCatalogs = new List<AssemblyCatalog>();
    foreach (var a in assemblies)
    {
        assCatalogs.Add(new AssemblyCatalog(a));
    }
    if (mocksProvider != null)
    {
        var providers = new List<ExportProvider>();
        providers.Add(mocksProvider); // need to use the mocks provider before the assembly ones
        foreach (var ac in assCatalogs)
        {
            var assemblyProvider = new CatalogExportProvider(ac);
            providers.Add(assemblyProvider);
        }
        container = new CompositionContainer(providers.ToArray());
        foreach (var p in providers) // must set the source provider for CatalogExportProvider back to the container (kinda stupid but apparently no way around this)
        {
            if (p is CatalogExportProvider)
            {
                ((CatalogExportProvider)p).SourceProvider = container;
            }
        }
    }
    else
    {
        container = new CompositionContainer(new AggregateCatalog(assCatalogs));
    }
    container.ComposeParts(this);
}
More info on my post: https://yoavniran.wordpress.com/2012/10/18/unit-testing-wcf-and-mef/
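For reference, usage from a test fixture might look roughly like this; the base-class name and the way mocks are registered on the MockExportProvider come from the linked post and are only sketched here:
[TestFixture]
public class CandyModuleTests : MefTestBase // hypothetical base class exposing AutoWire
{
    [Import]
    public ICandyInterface Candy { get; set; }

    [SetUp]
    public void Compose()
    {
        var mocks = new MockExportProvider();
        // Register any mocked exports on 'mocks' here, then compose this fixture:
        AutoWire(mocks, typeof(MyCandy).Assembly);
    }

    [Test]
    public void Candy_WasComposed()
    {
        Assert.IsNotNull(Candy);
    }
}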