How do I Unit Test the Output of a View in MonoRail? - unit-testing

I've been trying to write some initial NUnit unit tests for MonoRail, having got some basics working already. However, while I've managed to check whether a Flash["message"] value has been set by a controller action, the BaseControllerTest class doesn't seem to store the output for a view at all, so whether I call RenderView or the action itself, nothing gets added to the Response.OutputContent data.
I've also tried calling InPlaceRenderView to try to get it to write to a StringWriter, and the StringWriter also seems to get nothing back - the StringBuilder that returns is also empty.
I'm creating a new controller instance, then calling
PrepareController(controller,"","home","index");
So far it just seems like BaseControllerTest causes any output to be discarded. Am I missing something? Should this work? I'm not 100% sure, because I'm also running these unit tests in MonoDevelop on Linux, although MonoRail itself works fine there.
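For reference, this is roughly the shape of the test I'm attempting (HomeController and Index are just placeholders for my own controller and action):
[TestFixture]
public class HomeControllerViewTests : BaseControllerTest
{
    [Test]
    public void Index_Should_Produce_View_Output()
    {
        // HomeController/Index stand in for my real controller and action
        var controller = new HomeController();
        PrepareController(controller, "", "home", "index");

        controller.Index();

        // Checking a Flash value set by the action works fine...
        Assert.AreEqual("expected message", controller.Flash["message"]);

        // ...but the rendered view never appears - this is always empty
        Assert.IsTrue(Response.OutputContent.Length > 0,
            "Expected some rendered view output");
    }
}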

While I haven't found an ideal method for testing views, this matters less when ViewComponents can be tested adequately. To test views within the site itself, I can use Selenium. In theory that can be made part of an NUnit test suite, but it didn't run successfully under MonoDevelop in my tests (it consistently failed to open the connection to Selenium RC, despite the RC interactive session working fine). However, the Selenium tests can be run as a suite from Firefox, which is not too bad: unit testing with NUnit, then integration/system testing scripted as a Selenium suite, and that setup works in a Linux/MonoDevelop environment.
As for testing the underlying elements, you can check for redirections, check the Flash values set, and the like, so that's all fine. For testing ViewComponents, the part-mocked rendering does return the rendered output in an accessible form, so they've proved much easier to test in NUnit (with a base test class of BaseViewComponentTest), as follows:
[Test]
public void TestMenuComponentRendersOK()
{
    var mc = new MenuComponent();
    PrepareViewComponent(mc);

    var dict = new System.Collections.Specialized.ListDictionary();
    dict.Add("data", getSampleMenuData());
    dict.Add("Name", "testmenu");
    // other additional parameters

    mc.RenderComponent(mc, dict);

    Assert.IsTrue(this.Output.Contains("<li>"), "List items should have been added");
}

Related

Functional tests are a superset of unit tests, aren't they?

I have been reading about unit tests and functional tests for a while now.
If I write exhaustive functional tests, won't they in turn cover the units underneath as well, which would make the unit tests redundant?
We follow agile, and we write the functional tests using WebDriver as soon as we are done with a 'slice' of the functionality, which is typically 2-4 weeks of sprint time.
Perhaps, but not necessarily.
Functional tests can be thought of as "Black Box" tests, whereby you are looking at a specific function (regardless of whether that's a single method, module, system, etc.) and checking that for a given input, you get a given output.
However, if the functional test fails, all you can say is that the system is faulty; it doesn't necessarily give you any indication of what part of the system is to blame. Of course you will go off and diagnose the problem, but you don't know up front what the problem is.
e.g.
//assuming you have a UserService that amongst other things passes through to a UserRepository
var repo = new UserRepository();
var sut = new UserService(repo);
var user = sut.GetUserByID(1);
Assert.IsNotNull(user); //Suppose this fails.
In the above case, you don't know if it's because the UserService's GetUserByID() function is faulty - perhaps it did not call repo.GetUserByID and just returned null, perhaps it did get a User from repo but accidentally then returned an uninitialized temp variable, etc - or perhaps it's because the dependency (repo) itself is faulty. In any case, you'll have to debug into the issue.
Unit tests, on the other hand, are more like "White Box" tests whereby you have effectively taken the cover off the system and are testing how the system behaves in order to do what you want it to do.
e.g. (code below may not compile and is just for demo purposes)
//setup a fake repo and fake the GetUserByID method to return a known instance:
var repo = new Mock<UserRepository>();
var user = new User { ID = 1 };
repo.Setup(r => r.GetUserByID(1)).Returns(user);
var sut = new UserService(repo.Object);
var actual = sut.GetUserByID(1);
//now make sure we got back what we expected. If we didn't, then UserService has a problem.
Assert.IsNotNull(actual);
Assert.AreEqual(user, actual);
In this case you are explicitly verifying that sut.GetUserByID() behaves in the expected way - that it invokes repo.GetUserByID() and returns the resultant object without mangling or changing it.
If this unit test passes, but the functional test fails, you can say with confidence that the problem does not lie with the UserService, but with the UserRepository class, without even looking at the code. That may or may not save you time, it really depends on how complex your functional units are.
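If you want to pin down the interaction itself rather than just the return value, Moq can also verify the call explicitly; a minimal addition to the test above (assuming the same mock setup) would be:
//verify the service really delegated to the repository exactly once
repo.Verify(r => r.GetUserByID(1), Times.Once());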

Ordered Selenium unit tests

I have a small problem: I have created some Selenium tests, but I can't order the test cases I have created. I know unit tests should not be ordered, but this is what I need in my situation. I have to follow these steps: log in first, create a new customer, change some details about the customer, and finally log out.
Since there is no option to order unit tests in NUnit, I can't do this.
I already tried another option: creating a unit test project in Visual Studio, because Visual Studio 2012 can create an ordered unit test. But this is not working, because I can't run a unit test while I am running my ASP.NET project. A separate solution file is also not a good option, because I want to verify my data after it has been submitted by a Selenium test.
Does anyone have another solution to my problem?
If you want to test all of those steps in a specific order (and by the sounds of it, as a single session), then really you are talking about an acceptance test; in that case it's not a sin to write more complex test methods and assert your conditions after each step.
If you want to test each step in true isolation (a pure unit test), then each unit test must be capable of running by itself without any reference to any other tests; but when you're testing the actual site UI itself, this isn't really an option for you.
Of course, you really could have every single test somehow set up every single dependency without reference to any other actions (e.g. in the last test you would need to fake the login token, your data layer would have to pretend that you added a new customer, etc.), but that is a lot of work for dubious benefit.
I say this based on the assumption that you already have unit tests written for the server-side controllers, layers, models, etc., which run without any reference to the actual site in a browser, so you are already confident that the various back-end parts of your site do what they are supposed to do.
In your case I'd recommend more of a hybrid integration/acceptance test:
void Login(IWebDriver driver)
{
    //use driver to open browser, navigate to login page, type user/password into box and press enter
}

void CreateNewCustomer(IWebDriver driver)
{
    Login(driver);
    //and then use driver to click "Create Customer" link, etc, etc
}

void EditNewlyCreatedCustomer(IWebDriver driver)
{
    Login(driver);
    CreateNewCustomer(driver);
    //do your selenium stuff..
}
and then your test methods:
[Test]
public void Login_DoesWhatIExpect()
{
    var driver = new InternetExplorerDriver();   // the helpers navigate to "your Login URL here"
    Login(driver);
    Assert(Something);
}

[Test]
public void CreateNewCustomer_WorksProperly()
{
    var driver = new InternetExplorerDriver();
    CreateNewCustomer(driver);
    Assert(Something);
}

[Test]
public void EditNewlyCreatedCustomer_DoesntExplodeTheServer()
{
    var driver = new InternetExplorerDriver();
    EditNewlyCreatedCustomer(driver);
    Assert(Something);
}
In this way the order of the specific tests does not matter; certainly if the Login test fails then the CreateNewCustomer and EditNewlyCreatedCustomer tests will also fail, but that's actually irrelevant in this case, as each test exercises an entire "thread" of operation.

Grails: mocked data from a unit test is available in an integration test

I'm having a failing integration test because of test pollution (tests pass or fail depending on the order they are run in).
What baffles me a bit, however, is that data I have mocked in a unit test with mockDomain(Media.class, [new Movie(...)]) seems to still be present and available in other tests, even integration tests.
Is this the expected behaviour? Why doesn't the test framework clean up after itself between tests?
EDIT
Really strange, the documentation states that:
Integration tests differ from unit tests in that you have full access to the Grails environment within the test. Grails will use an in-memory HSQLDB database for integration tests and clear out all the data from the database in between each test.
However, in my integration test I have the following code:
protected void setUp() {
    super.setUp()
    assertEquals("TEST POLLUTION!", 0, Movie.count())
    ...
}
Which gives me the output:
TEST POLLUTION! expected:<0> but was:<1>
Meaning that there is data present when there shouldn't be!
Looking at the data present in Movie.list(), I find that it corresponds to the data set up in a previous (unit) test:
protected void setUp() {
    super.setUp()
    //mock the superclass and subclasses as instances
    mockDomain(Media.class, [
        new Movie(id:1, name:'testMovie')
    ])
    ...
}
Any ideas why I'm experiencing these issues?
It's also possible that the pollution is in the test database. Check DataSource.groovy to see what is being used for the test environment. If it is set to use a database where dbCreate is something other than "create-drop", any previous contents of the database could also be showing up.
If this is the case, the pollution has come from an entirely different source: not from the unit tests, but from the database itself, because when you switch to running the integration tests you connect to a real database with all the data it contains.
We experienced this problem: our test environment had dbCreate set to "update". Quite why this was set for integration tests puzzled me, so I switched to "create-drop" to make sure that test runs start with a clean database.

How to use "Pex and Moles" library with Entity Framework?

This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit testing tool).
I have a Data project with a very simple model containing just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business-layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parametrised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that checks that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();

        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex Explorations for this particular parametrised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This was the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update attribute [PexMethod(MaxStack = 200)]
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it is now)? I'm running just Pex & Moles - no VS test or NUnit or anything else.
I guess I should probably also set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g., by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
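As a minimal sketch of what I mean (IItemSource is a hypothetical seam I'm introducing here, and the DBRepository constructor taking it is also an assumption; it needs System.Linq for ToList/AsQueryable):
// hypothetical seam over the EF context - not part of the real project
public interface IItemSource
{
    IQueryable<DBItem> Items { get; }
}

// fake implementation backed by an in-memory list
public class FakeItemSource : IItemSource
{
    private readonly List<DBItem> data;
    public FakeItemSource(IEnumerable<DBItem> data) { this.data = data.ToList(); }
    public IQueryable<DBItem> Items { get { return data.AsQueryable(); } }
}

[Test]
public void GetItems_ReturnsOnlyRecentlyChangedItems()
{
    var source = new FakeItemSource(new[]
    {
        new DBItem { Changed = DateTime.Today },              // recent - expected in the result
        new DBItem { Changed = DateTime.Today.AddDays(-30) }  // too old - should be filtered out
    });

    // assumes a DBRepository constructor that accepts the seam
    var repo = new DBRepository(source);
    IList<BLItem> result = repo.GetItems();

    Assert.AreEqual(1, result.Count);
}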
I am just getting to grips with Pex also... my issues were around wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, the errors went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters - i.e. don't instantiate the repo in the method, but pass it in.
A general problem you have is that you are instantiating big objects inside your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation; I build up datasets and test my data access code against those.
I use Pex on my business logic and objects.
If I were to try to test my DAL code, I'd have to use IoC to pass the data context into the methods - which would then make testing possible, as you can mock the data context.
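A rough sketch of what that injection might look like for the GetItems() method above (ISomeContext and this constructor are assumptions for illustration, not the poster's actual code):
// assumed abstraction over the EF context, exposing just what GetItems needs
public interface ISomeContext : IDisposable
{
    IQueryable<DBItem> Items { get; }
}

public class DBRepository
{
    private readonly Func<ISomeContext> contextFactory;

    // the factory is injected (e.g. via an IoC container) instead of
    // the repository constructing EFContext itself
    public DBRepository(Func<ISomeContext> contextFactory)
    {
        this.contextFactory = contextFactory;
    }

    public IList<BLItem> GetItems()
    {
        using (var ctx = contextFactory())
        {
            DateTime limit = DateTime.Today.AddDays(-10);
            return ctx.Items
                      .Where(i => i.Changed > limit)
                      .ToList()
                      .ConvertAll(i => i.ToBusinessObject());
        }
    }
}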
You should use the Entity Framework Repository Pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it is deceiving when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, in that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that data can be added in the first place before I test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test, which is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
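A minimal sketch of that approach, using the db/someData names from the question (the Remove call in the teardown is just a placeholder for however you clean up, and this assumes Get returns the stored item):
[SetUp]
public void PutKnownDataInTheDatabase()
{
    // arrange the data every test needs, independently of any other test
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // the test no longer relies on AddTest having run first
    var result = db.Get(someData);
    Assert.IsNotNull(result);
}

[TearDown]
public void CleanUp()
{
    // remove the data again so tests stay independent
    // (however your db object supports that - this part is an assumption)
    db.Remove(someData);
}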
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of unit testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about whether 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connecting to a real database during the test, your code should work against some sort of data interface, and the test should use a mock of that interface. If the point of the test is to exercise the database connection itself, then you may simply be using the wrong tool for the job - that's not really a unit test.
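For example (a sketch only - IDataStore, Data, DataConsumer and Fetch are all made-up names for illustration, with Moq providing the mock):
// the code under test depends on this interface rather than a concrete database
public interface IDataStore
{
    void Add(Data item);
    Data Get(Data item);
}

[Test]
public void GetTest_UsingAMockedDataInterface()
{
    var someData = new Data();
    var store = new Mock<IDataStore>();
    store.Setup(s => s.Get(someData)).Returns(someData);

    // whatever class consumes the data interface - hypothetical here
    var sut = new DataConsumer(store.Object);

    Assert.AreEqual(someData, sut.Fetch(someData));
}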
I don't think that's possible out of the box.
Anyway, your test class design as you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
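A sketch of what that fixture-level guard might look like (db and someData are just the names from the question, assumed to be set up elsewhere in the fixture):
[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void CheckPrerequisites()
    {
        try
        {
            // prove that Add works before running any Get tests
            db.Add(someData);
        }
        catch (Exception ex)
        {
            // skip the whole fixture rather than reporting a second failure
            Assert.Ignore("Add is broken, skipping Get tests: " + ex.Message);
        }
    }

    [Test]
    public void GetTest()
    {
        var result = db.Get(someData);
        Assert.IsNotNull(result);
    }
}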
Create a shared flag that Add sets when it fails, and return early from the Get test if the flag is set:
public boolean addFailed = false;

public void testAdd () {
    try {
        ... old test code ...
    } catch (Throwable t) { // Catch all errors
        addFailed = true;
        throw t; // Don't forget to rethrow
    }
}

public void testGet () {
    if (addFailed) return;
    ... old test code ...
}