Test driving Nancy Modules - unit-testing

OK - I love NancyFx. Writing a web application in so few lines is just amazing!
But how do you test drive your NancyModules on the unit level?
Please note that I am aware of the excellent test framework supplied with Nancy (Nancy.Testing on NuGet), which gives great ways to test almost the whole application stack. But here I mean the unit-level tests I use to flesh out the contents of my NancyModule, in TDD fashion.
Since the routes are defined in the constructor, often together with a lambda expression that constitutes the whole action, it feels a bit "unreachable" from a unit test. But have I missed something obvious on how to test the actions of the route?
For example, how would a unit test for this simple application look?
public class ResourceModule : NancyModule
{
    private readonly IProductRepository _productRepo;

    public ResourceModule(IProductRepository repo) : base("/products")
    {
        _productRepo = repo;

        Get["/list"] = parameters => {
            return View["productList.cshtml", _productRepo.GetAllProducts()];
        };
    }
}
See there - now I wrote the production code before the test... :) Any suggestions on how to start with the test?

You can do test-first development with the testing tools we provide:
In your test startup, configure a bootstrapper that only contains the module you have under test and any fake objects you want.
In your test, execute a specific route (like GET /list) - you might want a small helper for this to remove some repeated code.
Assert on what comes back - you have full access to the request and response objects (for headers, cookies etc.), along with helpers for HTML bodies and, coming in 1.8, helpers for handling JSON, XML and plain string responses in the body.
Move onto the next route, rinse and repeat.
Ok, so you're not just testing the module, but if you look at the call stack, there's not much going on before or after you hit your route so it's not that big of a deal in my book :-) If you really do want to test the module in complete isolation then you can just construct it yourself and poke the individual routes accordingly (they're just dictionaries in the module).
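For illustration, a first test along those lines might look roughly like this, assuming Nancy 1.x-era Nancy.Testing plus xUnit, and a hand-rolled FakeProductRepository in the test project (those two are assumptions, not part of the question):

using Nancy;
using Nancy.Testing;
using Xunit;

public class ResourceModuleTests
{
    [Fact]
    public void Getting_the_product_list_returns_OK()
    {
        // Only the module under test and its fake dependency go into the bootstrapper.
        var bootstrapper = new ConfigurableBootstrapper(with =>
        {
            with.Module<ResourceModule>();
            with.Dependency<IProductRepository>(new FakeProductRepository());
        });
        var browser = new Browser(bootstrapper);

        // Execute the route.
        var response = browser.Get("/products/list", with => with.HttpRequest());

        // Assert on what comes back. (Rendering the view requires it to be locatable
        // from the test project, or a stubbed view factory to be registered.)
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}

A small helper that builds the Browser can then be shared by the tests for the other routes.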

As part of Nancy.Testing you can use the configurable bootstrapper to control the setup, including the IoC setup. That should let you test the module without its lower-level dependencies, and so enable TDD.

Related

Unit testing an over-large front-controller-based class: optimal strategy

I have a web application which currently exists as a single large class containing a front controller, and a bootstrap file which runs the class and passes in the settings.
Over time, the class has become over-large, with multiple concerns that I have long wanted to refactor into what will be rather obvious subclasses. I'm very familiar with all kinds of refactorings.
However, for this application, at present there is no test coverage. Because all interactions are done by reading GET and POST parameters, there is only one public method into the system, i.e. URL interactions through the front controller entry point. So the class is hard to unit test (I will be using PHPUnit) as it stands.
Obviously it is far safer to refactor with tests already in place. So I would welcome views on which strategy is best:
1) Create a pile of tests that implement GET and POST interactions as per a user using the web application; or
2) Create a pile of tests against the private functions by using the ReflectionClass workaround, and convert these to standard PHPUnit tests concurrently with the refactoring; or
3) Add unit testing after doing the subclass refactoring, testing the public entry API points to the new subclasses.
Whichever way you approach it, I would do it one piece of functionality at a time if you can; it makes the task less daunting and therefore more achievable...
When confronted with such situations in the past (quite a few times in fact), I usually do this:
First of all, does the application use external dependencies like databases? If so, we need to make these predictable. I usually try to find where these external dependencies are created and then delegate their creation to a good dependency injection framework (like Pimple).
Once that is done you can add rules to the DI factory methods that return test databases depending on some condition: I usually use a test domain for this, so if the site is foo.local then I also have a domain test.foo.local that sets (in the Apache config, for example) an environment variable APPLICATION_ENV to 'tests'. The DI container can then inspect this environment variable to serve the right dependency, in our case a connection pointing to a test database.
(I don't like that production code is aware that it might be running in a test scenario, but it's a lesser evil, and it can be alleviated by using config files that have the environment name in the filename, for example. I digress.)
When this is done, in my tests I then have a reference to the same test database, which I use to set up some predictable test data with something like DbUnit or test-db-acle (disclaimer: I wrote the latter, so I tend towards it).
Once the test data is in place, I issue a request to the URL in question with something like Guzzle (in various important variations) and record the output(s), which I then use to make a test that sets the behaviour 'in stone'.
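The record-and-pin-down idea is language-independent; purely as an illustration, such a characterization ('golden master') test could be sketched like this in C# with xUnit and HttpClient - the URL, query string, approved-output file and test-data fixture are all invented for the example, and in PHP the same shape works with PHPUnit and Guzzle:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class FrontControllerSmokeTests
{
    [Fact]
    public async Task Listing_page_still_matches_the_recorded_output()
    {
        // Predictable test data is assumed to have been loaded into the test database
        // beforehand (the equivalent of a DbUnit / test-db-acle fixture).
        using var client = new HttpClient();

        // Drive the only public entry point exactly as a browser would.
        var actual = await client.GetStringAsync("http://test.foo.local/?action=list&category=books");

        // The golden master: output recorded once from the known-good application.
        var expected = File.ReadAllText("approved/list-books.html");

        Assert.Equal(expected, actual);
    }
}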
NOTE: this is a very imprecise method; the best tests are written test-first, but that is obviously not an option in this case.
However, if you are reasonably sure that some manual testing has been done on this, then this is a good way to at least get a few smoke tests into place. In my experience, having even one rudimentary smoke test in place is infinitely more valuable than having none.
Now that you have a smoke test in place, you can refactor this part of the application and move to a new one.
I hope this helps.

What do you test with Unit tests?

I am new to unit testing. Suppose I am building a web application. How do I know what to test? All the examples that you see are some sort of basic sum function that has no real value; at least I've never written a function that adds two inputs and returns the result!
So, my question... on a web application, what are the sorts of things that need to be tested?
I know that this is a broad question, but anything will be helpful. I would be interested in links or anything that gives real-life examples, as opposed to conceptual examples that don't have any real-life usage.
Take a look at your code, especially the bits where you have complex logic with loops, conditionals, etc., and ask yourself: how do I know if this works?
If you need to change the complex logic to take into account other corner cases then how do you know that the changes you introduce don't break the existing cases? This is precisely what unit testing is intended to address.
So, to answer your question about how it applies to web applications: suppose you have some code that lays out the page differently depending on the browser. One of your customers refuses to upgrade from IE6 and insists that you support it. So you unit test your layout code by simulating the user-agent string from IE6 and checking that the layout is what you expect.
A customer tells you they've found a security hole where using a particular cookie will give you administrator access. How do you know that you've fixed the bug and it doesn't happen again? Create a unit test for it, and run the unit tests on a daily basis so that you get an early warning if it fails.
You discover a bug where users with accents in their names get corrupted in the database. Abstract out the webform input from the database layer and add unit tests to ensure that (eg) UTF8 encoded data is stored in the database correctly and can be retrieved.
You get the idea. Anywhere where part of the process has a well-defined input and output is ideal for unit testing. Anything that doesn't is ideal for refactoring until it is well defined. Take a look at projects such as WebUnit, HTMLUnit, XMLUnit, CSSUnit.
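To make the accented-names example concrete: once the storage code sits behind its own interface, a round-trip test against a test database pins the behaviour down. A minimal sketch in xUnit-style C# (SqlUserRepository, TestDatabase and User are hypothetical names):

using Xunit;

public class UserNameEncodingTests
{
    [Fact]
    public void Accented_names_survive_a_save_and_load_round_trip()
    {
        // The real repository, pointed at a throw-away test database.
        var repository = new SqlUserRepository(TestDatabase.ConnectionString);

        var id = repository.Save(new User { Name = "Zoë Ångström" });
        var reloaded = repository.Load(id);

        Assert.Equal("Zoë Ångström", reloaded.Name);
    }
}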
The first part of testing is to write testable applications. Separate out as much functionality as possible from the UI. Refactor into smaller methods. Learn about dependency injection, and try using that to create methods that can take simple, throw-away input that produces known (and therefore testable) results. Look at mocking tools.
Infrastructure and data layer code is easiest to test.
Look at behavior-driven testing as well as test-driven design. For my money, behavior testing is better than pure unit testing; you can follow use-cases, so that tests match against expected usage patterns.
Unit testing means testing any unit of work; the smallest units of work are methods and functions. The art of unit testing is to define tests for a function that cannot just be checked by inspection; what unit testing aims at is to test every functional requirement of a method.
Consider, for example, a login function. These are the tests you could write for failures:
1. Does the function fail on empty username and password
2. Does the function fail on the correct username but the wrong password
3. Does the function fail on the correct password but the wrong username
Then you would also write tests that the function should pass:
1. Does the function pass on correct username and password
This is just a basic example, but it is what unit testing attempts to achieve: testing the things that may have been overlooked during development.
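Written out as code, those cases might look roughly like this in xUnit-style C# (the Authenticator class, its Login(username, password) signature and the seeded credentials are all made up for the example):

using Xunit;

public class LoginTests
{
    // Hypothetical system under test, assumed to know about the user "alice".
    private readonly Authenticator _auth = new Authenticator();

    [Fact]
    public void Fails_on_empty_username_and_password() =>
        Assert.False(_auth.Login("", ""));

    [Fact]
    public void Fails_on_correct_username_but_wrong_password() =>
        Assert.False(_auth.Login("alice", "wrong-password"));

    [Fact]
    public void Fails_on_correct_password_but_wrong_username() =>
        Assert.False(_auth.Login("not-alice", "correct-password"));

    [Fact]
    public void Passes_on_correct_username_and_password() =>
        Assert.True(_auth.Login("alice", "correct-password"));
}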
Then there is a purist approach too where a developer is first supposed to write tests and then the code to pass those tests (aka test driven development).
Resources:
http://devzone.zend.com/article/2772
http://www.ibm.com/developerworks/library/j-mocktest.html
If you're new to TDD, may I suggest a quick trip into the world of BDD? My experience is that the language really helps people pick up TDD more quickly. Particularly, I point you at this article, in which Dan North suggests "what to test":
http://blog.dannorth.net/introducing-bdd/
Note for transparency: I may be heavily involved in the BDD movement.
Regarding the classes to unit test in a web-app, I'd consider starting with controllers, domain objects if they have complex behaviour, and anything called "service", "manager", "helper" or "util". Please also try renaming any classes like this so that they are less generic and actually say what they do. Classes called "calculator" or "converter" are also good candidates, and you'll probably find more in the same package / folder.
There are a couple of good books which could help you too:
Martin Fowler, "Refactoring"
Michael Feathers, "Working Effectively with Legacy Code"
Good luck!
If you start out saying, "How do I test my web app?" that is biting off a lot at once, and it's going to be hard to see unit testing as providing any kind of benefit. I got into unit testing by starting with small pieces that were isolated, then writing libraries test-first, and only then building whole applications that were testable.
Generally a web app has a domain model; data access objects that run queries against a database and return domain objects; services that call the data access objects; and controllers that accept HTTP requests and call the services.
Tests for the controllers will check that they call the right service method with the right parameters. Service objects can be mocks injected during test setup.
Tests for the services will check that they call the right data access objects and perform whatever logic they need to be performing. Data access objects can be mocks injected during test setup.
Tests for the data access objects will check that they perform the right database operation (query, update or whatever) by checking the contents of the database before and after. For DAO tests you'll need a database, and a tool like DBUnit to pre-populate it before the test. Your domain objects' getters and setters will also get exercised by these tests, so you won't need a separate test for them.
Tests for the domain model will check that whatever domain logic you have encoded in them works (Sometimes you may not have any). If you design your domain model so it is not coupled to the database then the more logic you put in the domain model the better because it's easy to test. You shouldn't need any mocks for these tests.
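For instance, a controller test in that style might look like this sketch, using xUnit and Moq (ProductController, IProductService and their members are assumed names, and List() is taken to return the model it would render):

using Moq;
using Xunit;

public class ProductControllerTests
{
    [Fact]
    public void List_asks_the_service_for_all_products_and_returns_them()
    {
        // The service is a mock injected during test setup; the controller is the only real object.
        var products = new[] { new Product { Name = "Widget" } };
        var service = new Mock<IProductService>();
        service.Setup(s => s.GetAllProducts()).Returns(products);

        var controller = new ProductController(service.Object);

        var model = controller.List();

        service.Verify(s => s.GetAllProducts(), Times.Once());
        Assert.Same(products, model);
    }
}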
For a web app the kinds of tests you need are slightly different. Unit tests test a particular component of your program. For a web app, you would need to test that forms accept/reject the right inputs, that all links point to the right place, that it can cope with unexpected inputs, etc. I'd have a look at Selenium if I were you; I've used it extensively in testing a number of sites: Selenium HQ
I don't have experience of testing web apps, but speaking generally: you unit test the smallest 'chunks' of your program possible. That means you test each function on an individual basis. Anything on a larger scale becomes an integration test.
Of course, there are going to be methods so simple that it's not worth your time to write a test for them, but on the whole, aim to test as great a proportion of your code as possible.
A rule of thumb is that if it is not worth testing it is not worth writing.
However, some things are very difficult to test, so you have to do some cost-benefit analysis on what you test. If you initially aim for 70% code coverage, you will be on the right track.

Mock Repository vs. Real Repository w/Mocked Data

I must be doing something fundamentally wrong.
I am implementing my repositories and then testing them with mocked data. All is well.
Now I want to test my domain objects so I point them at mock repositories.
But I'm finding that I have to re-implement logic from the 'real' repositories in the mocks, or create 'helper classes' that encapsulate the logic and interact with the repositories (real or mock), and then I have to test those too.
So what am I missing - why implement and test mock repositories when I could use the real ones with mocked data?
EDIT: To clarify, by 'mocked data' I do not hit the actual database. I have a 'DB mock layer' I can insert under the real repositories that returns known-data.
Maybe you're mixing your abstractions. Maybe some of the helpers and logic that you're talking about should be in your domain objects. Your repositories should be implementing a CRUD interface, so your domain logic should be using eight actions from the repository: the four CRUD operations, each of which can either succeed or fail. A good retrieve and a bad retrieve (exception thrown) are only two of the eight. One test is how the bad retrieve is handled in the domain object; the rest of the tests should exercise a good retrieve.
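For example, the 'bad retrieve' test can use a tiny hand-rolled stub, so no repository logic has to be re-implemented (all of the type names here are invented for the example):

using Xunit;

// Stub that only knows how to fail; it implements the same one-method interface
// the domain object depends on in this sketch.
public class ThrowingOrderRepository : IOrderRepository
{
    public Order GetById(int id) => throw new DataUnavailableException();
}

public class OrderSummaryTests
{
    [Fact]
    public void Summary_reports_the_order_as_unavailable_when_the_retrieve_fails()
    {
        var summary = new OrderSummary(new ThrowingOrderRepository());

        Assert.Equal("Order unavailable", summary.Describe(42));
    }
}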
Unit testing is meant to keep the path between the tested element and the test data as short as possible, in order to reduce search time when a bug occurs - the bug is then either in the routine you test or in the test unit and nothing else (well, maybe the OS or the compiler... but these are very rare).
Now imagine you made a database accessor module. The module seems to work OK. Then you make a config reader that reads from the database. Instead of mocking the database accessor in the config reader's unit test, you use your real database module on top of mock config data in the database. It still seems to work OK. Now you add a GUI layer that uses the config object returned by the config reader. Instead of mocking the "boiled" config data you use the real modules, which apparently work.
And now your GUI stops working for a certain value. Where is the error? In the GUI, in the config reader, in the database accessor or in the values in the mock database?
In other words, you're not just testing the code you're writing, you're testing the real repository system as well, and if a bug is found you have an additional layer to analyze.
"Why implement and test mock repositories when I could use the real ones with mocked data?"
I guess it really comes down to how much code coverage you think you need to have. If you feel the repository is sound and you can use it for testing, go for it. Perhaps the repository unit tests can be run prior to the mock database unit tests, so that you can use the real repositories for testing and feel confident that the testing is still valid.

Integration testing - can it be done right?

I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?
What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?
An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.
To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
What I would do in order to test the "real" application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern?
IMO (and I have no literature to back me on this), the key difference between our various forms of testing is scope:
Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
System integration testing is testing of a system end to end [a special case of integration testing]
If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.
For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.
Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. At which point, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.
Hope this helps, :)
ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate it ...
What you describe is indeed integration testing (more or less). And no, it is not an antipattern, but a necessary part of the sw development lifecycle.
Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still have not much clue about whether the whole system is going to work as expected.
There are several aspects of why it is so:
unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?
* One example I just read from Luke Hohmann's book Beyond Software Architecture: in an app which applied strong antipiracy defense by creating and maintaining a "snapshot" of the IDs of HW components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app in 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they took it for granted that the machine has a network card whose MAC address can be incorporated into the snapshot...
"What I would do in order to test the 'real' application: making the HTTP request (either manually or automated) with some ids from the database and then looking in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?"
Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both are desired in the SDLC. If possible, factor out code used by both unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies). Your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
"Is this what is called an 'integration test'?"
Yes indeed it is.
In an integration test, just as in a unit test, you need to validate what happened in the test. In your example you specified an OutfileWriter; you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want something like (sketched here in C#):
// Assumes using System.IO, System.Linq and System.Collections.Generic.
public class OutFileValidator
{
    public bool IsCorrect(string fileName, IEnumerable<string> expectedLines)
    {
        // Open the written file and compare its lines to the expected data.
        return File.ReadAllLines(fileName).SequenceEqual(expectedLines);
    }
}
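The integration test itself can then drive the wired-up application and hand the validator the file it expects; the TestServer helper, URL, file path and expected records below are placeholders:

using Xunit;

public class ExportIntegrationTests
{
    [Fact]
    public void Export_writes_the_expected_records_to_the_outfile()
    {
        // Drive the real, wired-up application (hypothetical helper that issues the HTTP request).
        TestServer.Get("/export?ids=1,2,3");

        var validator = new OutFileValidator();
        Assert.True(validator.IsCorrect("out/export.txt",
            new[] { "record 1", "record 2", "record 3" }));
    }
}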
You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications.
YouTube Video
Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. Spelling the same idea another way, you are trying to minimize the time required to run all of the tests, without sacrificing any information.
The larger tests, therefore are mostly about making sure that the plumbing is right - is Tab A actually in slot A, rather than slot B; do both components agree that length is measured in meters, rather than feet, and so on.
There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatoric explosion that would happen at a unit level.
Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow with WatiR / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.
To complete a scenario, you have to use TDD to drive the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back end implementation, however they verify that your implementation works; if there is something that isn't working in the web app for that feature, the behaviour needs to be in a scenario.
You can of course use integration testing, as others pointed out.
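To make the scenario-driven approach concrete, a scenario and one set of step bindings might look roughly like this in SpecFlow-style C# (the Gherkin text, the TestData and BrowserDriver helpers and the route are all invented for the example):

using TechTalk.SpecFlow;
using Xunit;

// Feature file (Gherkin), kept here as a comment:
//   Scenario: Listing products
//     Given 3 products exist
//     When I visit the product list page
//     Then I see 3 products

[Binding]
public class ProductListSteps
{
    private int _productsShown;

    [Given(@"(\d+) products exist")]
    public void GivenProductsExist(int count) =>
        TestData.CreateProducts(count); // hypothetical test-data helper

    [When(@"I visit the product list page")]
    public void WhenIVisitTheProductListPage() =>
        _productsShown = BrowserDriver.CountProductsOn("/products/list"); // hypothetical browser wrapper

    [Then(@"I see (\d+) products")]
    public void ThenISeeProducts(int expected) =>
        Assert.Equal(expected, _productsShown);
}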

architecture/design advice for a test program

I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests, which have a field 'CommandType' and some other fields, to a server.
The commandType can be 'NEW', 'CHANGE' or 'DELETE'
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge amount of 'CHANGE' requests followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not create my own framework here. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
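To illustrate the point with the .NET member of the family: in an xUnit.net (C#) suite, adding a test case is just adding another attributed method or class, and the runner discovers it without any central list being edited (the RequestClient and CommandType below are made-up stand-ins for your client code):

using Xunit;

// Adding this class is all that is required; the test runner discovers [Fact] methods
// automatically, so no registry or main() needs to be touched.
public class ChangeAfterNewTests
{
    [Fact]
    public void Server_accepts_a_change_request_for_a_previously_created_item()
    {
        var client = new RequestClient();                 // hypothetical client wrapper
        var id = client.Send(CommandType.New, "item-1");  // hypothetical request API
        var response = client.Send(CommandType.Change, id);

        Assert.True(response.Succeeded);
    }
}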
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that creates a protocol between the testing tool and the applications, so that you can convert the instructions your tool and its consumers understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE would be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle this in their own special ways.
Use a C++ unit testing framework. Read this for details and examples.