How to best do unit testing for a web application

I am writing a web application that is very complex in terms of UI and relies heavily on AJAX, DOM and image manipulations.
Is there any standard practice (I don't want tools) that can be followed to reduce bugs?

A very simple technique is the smoke test, where you automate a click-through of your entire application. If the script runs and there are no errors anywhere in the logs, you have at least one defined level of quality.
This technique catches a fair number of regressions and is much more effective than it sounds. We use Selenium for this.
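For illustration, a minimal click-through sketch using the Selenium WebDriver C# bindings; the URL and element ids here are placeholders, not from the original answer:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class SmokeTest
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost/myapp"); // placeholder URL
            // Click through the main screens; any unhandled error fails the run.
            driver.FindElement(By.Id("mainMenu")).Click();
            driver.FindElement(By.LinkText("Reports")).Click();
            // ...continue through the rest of the application, then check the logs.
        }
    }
}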

Separate the logic and the UI portions - do not put all your business logic and complex code in the code-behind page. Instead, build it on the standard tier structure (data layer, business rules / logic layer, UI layer). This ensures that the logic code you want to test does not reference the form, but instead uses classes that are easily unit tested.
For a very basic example, don't have code that does this:
string str = TextBox1.Text; // .Text is already a string
//do whatever your code does
TextBox2.Text = str;
Instead extract the logic into a separate class with a method:
TextBox2.Text = Work.DoWork(TextBox1.Text);

public class Work
{
    public static string DoWork(string str)
    {
        //do work on str and return the result
        return str;
    }
}
This way you can write unit tests to verify that DoWork is returning the correct values:
string result = Work.DoWork("TestThisString");
Now all of your logic is unit testable, with only code that HAS to reference the page directly still in your UI layer.
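As a hedged example, a minimal NUnit test for the extracted method might look like this (the expected value depends on what DoWork actually does):

[Test]
public void DoWork_ReturnsExpectedValue()
{
    string result = Work.DoWork("TestThisString");
    Assert.AreEqual("TestThisString", result); // replace with the value DoWork should produce
}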

WatiN is a great tool for this.

A simple checklist (even on a piece of paper!) is the best way to make sure you never skip the important things. It's a good "smoke test" that nothing "standard" has been broken.


TDD + DDD: Model abstractions

I've recently had an interesting experience but haven't found a satisfying answer so far: I'm a big fan of DDD and try to define rich domain objects with behavior and good information hiding, even if the team officially doesn't practice DDD. At the end of the day it doesn't matter, as you have a well-defined object which represents something in the problem domain.
That said, I would also like to practice TDD more. Unfortunately, if I test a service, which uses such rich domain models, the models are usually not abstracted. Therefore, to test the behavior of the service, I need to set up the model as well. This model comes with its own invariants etc., therefore with every service test, I also test the model the service is using.
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
In my opinion, there seems to be no way around this but to start creating interfaces for models. But it seems like I am the only person thinking so. For example, here is a substantial article on why this is considered an anti-pattern:
https://lostechies.com/jamesgregory/2009/05/09/entity-interface-anti-pattern/
I'm also not too keen on creating interfaces for all models, as they should really represent something, and adding another layer of abstraction just for testing seems like overkill. That said, what would be the best solution here? How do people in the field who combine DDD and TDD handle this?
This seems like a big no-go, as I'm not only "not really unit-testing", but it's also troublesome to set up the tests, as the arrange-code gets large.
I think you can dismiss "not really unit-testing"; the important thing is to use tools that are fit for purpose, not the branding.
That said, troublesome to set up the tests is a legitimate concern, and all by itself sufficient excuse to look for a way to improve the design.
If your service were tightly coupled to some third party implementation, that offered no affordances for substitution, what would you do to decouple that from your tests? The usual answer would be to introduce a seam - a new design element between your code and the 3rd party code.
The two important characteristics of the seam:
it does afford substitution; which is to say, you have an interface.
the implementation of the interface that integrates with the third party code is "so simple there are obviously no deficiencies".
Then, in your tests, you introduce a substitute implementation.
The game with your "domain model" is exactly the same. Assuming that you are applying the usual lifecycle patterns, the seam includes a substitute for the repository and a substitute for the aggregate root entity.
Some good news - you don't necessarily need to shadow the entire aggregate: only the parts of the interface that your service cares about. In effect, what you are doing is defining - for each service - the contract that describes the interactions between your service and the domain model. "Role interfaces" will be a useful search term here.
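A sketch of what such a role interface might look like in C# (the names are invented for the example, not taken from the question):

// Role interface: only the operations this particular service uses.
public interface IOrderCapacity
{
    bool CanAccept(int quantity);
}

// The real aggregate root implements the role interface with its full invariants...
public class Order : IOrderCapacity
{
    public bool CanAccept(int quantity) { /* real invariants here */ return quantity > 0; }
}

// ...while tests substitute a trivial fake that needs no elaborate arranging.
class FakeOrder : IOrderCapacity
{
    public bool CanAccept(int quantity) { return true; }
}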
First I would make sure these two conditions are met:
Domain models are POJOs
The domain layer is isolated (other layers can access the domain layer, but not the other way around)
Then a Factory, a Builder, or TestHelpers can be used to bring the models to the desired state for tests.
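For example, a sketch of a test-data builder; the class name, defaults, and constructor are assumptions made up for the illustration:

// Hypothetical builder: hides constructor details and invariants so that
// tests state only what they actually care about.
public class UserBuilder
{
    private string name = "any-name"; // sensible defaults keep arrange code short
    private bool isAdmin = false;

    public UserBuilder Named(string value) { name = value; return this; }
    public UserBuilder AsAdmin() { isAdmin = true; return this; }
    public User Build() { return new User(name, isAdmin); }
}

// Usage: User admin = new UserBuilder().AsAdmin().Build();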
Basics
Testing Scopes
Unit Testing
Integration Testing
Domain Models
These should be unit tests, which test the Domain Model's / Aggregate's methods.
Services
These should be integration tests, which test the integration of Service methods and the associated models.
My Broad Approach
When you're testing your domain models, there may be many variants that you'll need to account for in your unit tests.
When these models then need to be used within an integration test, I tend to go for some sort of CreationFactory (or ArrangementFactory) for your domain models.
You can then use these in both sets of tests.
So for example...
public class ArrangeUser {
    public static User ArrangeStandardUser() {
        return new User(...standard...);
    }
    public static User ArrangeAdminUser() {
        return new User(...admin...);
    }
}
Then in your Unit Test...
// Arrange
User standardUser = ArrangeUser.ArrangeStandardUser();
// Act
bool canDoSomething = standardUser.CanDoSomething();
// Assert
Assert.True(canDoSomething);
Then in your Integration Test...
// Arrange
User standardUser = ArrangeUser.ArrangeStandardUser();
ServiceToTest service = new ServiceToTest(standardUser); // replace with some sort of Repository Mock or whatever suits.
// Act
bool canDo = service.CanDoService();
// Assert
Assert.True(canDo);
This way you can test both the unit aspect and the service aspect - by creating a common way to build the arrangements - without having to abstract out the entities, and it solves the problem of recreating the same thing over and over again.
NB. This is just a basic code demo that can be made more complex, based on the scenario or your preferred test style.
I had a similar challenge and, together with my team, we created a tool that simplifies the test data arranging process by employing a random data generator: https://github.com/ocadotechnology/test-arranger. Especially take a look at:
How to organize tests with Test Arranger as it explicitly refers to the common DDD building blocks and explains how to arrange test data around them. In my case, following those recommendations resulted in a significant reduction in the amount and complexity of code for preparing the test data.
Custom Arrangers as it shows how to deal with the model invariants.
Besides the recommendations given on the test-arranger page, it is also handy to use Lombok's @Builder(toBuilder = true) (or an equivalent like Kotlin's copy method on data classes) on your domain classes. With the toBuilder method you can easily adjust randomly generated value objects and entities to the needs of a certain test case.

Unit/integration testing an nHibernate query

Scenario: I need to write a complex nHibernate query that returns a projected DTO, but I want to use a TDD approach. The method would look like this:
public PrintDTO GetUsersForPrinting(int userId)
{
    return Session.QueryOver<User>()
        //some joins, conditions etc.
        //returns projected DTO
        .SingleOrDefault<PrintDTO>();
}
Questions:
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
If I am using an in-memory db, can I write unit tests?
Is one test enough?
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
I ask because:
There is a lot of abstract information about TDD and integration testing, but when it comes to concrete implementation it is very difficult to apply that information.
TDD suggests that integration tests should be made up of unit tests:
This is not really a very good problem to learn TDD with. I assume you don't already know what the complex query looks like, and you want to use test-driven techniques to drive it out. Awesome :)
But let's see if I can answer your questions.
Yes
any test that includes a real db, whether it is in-memory or on-disk, is not a unit test. A unit test would use a mock db.
Maybe - if your query is complex enough, then no.
testGetUsersForPrinting or getUsersForPrintingTest or similar
Most probably I would drive out the query in a SQL interpreter, not in code. The aim would be to produce a series of integration tests against an in-memory db based on what I learn during this process.
Start from the minimum possible DTO you can think of, and build up from there.
Finally convert the query into nhibernate calls, then make the integration tests pass.
Test-driven, but not really unit-test-driven.
If you are willing to accept maximum TDD discipline and deal with working more slowly and being more annoyed than usual, you can automate each integration test as you develop it and write code to make it pass. This will mean you are switching frequently among 3 levels of abstraction / editors / environments (direct SQL queries, integration tests, C# code) - I deal with this by setting up techniques to force myself to follow the right steps each time.
This last bit is why this is not a good problem to learn TDD with. You will need a lot of discipline you probably haven't forced yourself to acquire yet!
Good luck.
OK, some concrete examples. I would modify your code sample to look like this:
public PrintDTO GetUsersForPrinting(int userId, ISession session)
{
    var data = session.QueryOver<User>()
        //some joins, conditions etc.
        .SingleOrDefault<PrintDTO>();
    return data; // or whatever
}
In your unit test you would write
[Test]
public void TestGetUsersForPrinting()
{
    // Arrange
    StubSession session = ...; // set up a stub session, which returns hardcoded values
    // Act
    PrintDTO dto = GetUsersForPrinting(111, session);
    // Assert
    Assert.That(dto.UserId, Is.EqualTo(111));
}
In your integration test you would use a real db; your session object would actually connect to it, and the queries would be resolved against that db.
Arrange-Act-Assert is a standard method for organizing unit tests.
Generally you want as few Asserts as possible in a unit test. And you will have multiple unit tests.
When you are writing a unit test, start by writing the Assert, then fill in the rest to make it compile/get the result you want. Make the test fail first, because then you know you have really delivered something when it passes.
In this example to implement a stub ISession you would derive a local StubSession class (only visible to the test suite) from ISession and just fill in the absolute minimum to get it to compile, and return the minimum data to get the test to pass.
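NHibernate's ISession is a large interface, so a faithful StubSession is mostly boilerplate; here is the shape of the idea, using a narrowed stand-in interface for brevity (the interface, the User.Id property, and the data are assumptions for illustration):

using System.Collections.Generic;

// Narrowed stand-in for the slice of the session the method uses. A real
// StubSession : ISession would stub all remaining members with
// throw new NotImplementedException();
public interface IUserSession
{
    IList<User> QueryUsers(int userId);
}

class StubSession : IUserSession // visible only to the test suite
{
    public IList<User> QueryUsers(int userId)
    {
        // the absolute minimum hardcoded data to make the test pass
        return new List<User> { new User { Id = userId } };
    }
}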
To build up to your whole DTO - assuming you know what you want in your DTO - proceed, as you say in the comments, incrementally. Build up each part of your DTO a piece at a time, and add a unit test for each piece.
Keeping track of this is another piece of TDD discipline.
Set yourself up with a TODO list - just a simple text file, or possibly a lengthy comment at the start of your test suite. List all the things you want to test, e.g. zero results, one result, two results, 20 results; user id; whatever other pieces of information you need to have.
If you are doing a complex query across tables or whatever, add a todo item for each join, each part of the where clause, etc.
Add items for ordering and paging etc. if you are using those.
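A hypothetical starting list for this query, kept as a comment at the top of the test suite, might look like this (items invented for illustration):

// TODO list for GetUsersForPrinting:
// [x] returns empty DTO when the user does not exist
// [ ] zero results, one result, two results, 20 results
// [ ] join to the orders table
// [ ] where clause: only active users
// [ ] ordering by surname
// [ ] paging: first page, last partial page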
Pick the simplest things first. Only do one small thing (in a single red-green-refactor cycle) at a time. As you work through your list, you might want to break items up into smaller pieces, or you might think of additional things you need to do. Add them to the TODO list rather than working directly on them.
In this particular case I would swap - after each red-green-refactor cycle - into the SQL environment and/or the sqlite integration test to work out how to make the next piece work. I guess this is a sort of step between red and green - choose what you will test next, write the test (which fails obviously), fiddle around in SQL until you know how to make it pass, write the nHibernate calls to make your test green, then refactor.
Be aware that some of the things you list might turn out not to be necessary, or take too long, etc. It's still good to write them down, so you know what you are not doing as well as what you are doing. Keep focused on your goal.
I tend to also develop a list of "smells" and/or refactorings that I can see I will want to do but am not quite ready for in this cycle. Remember to minimise duplication/refactor your tests as well as your SUT (System Under Test).
It's a doing rather than a seeing thing. The list of unit tests you end up with, and the code they exercise, is not a very good description of the journey. Kent Beck's original TDD book is slim and will give you some good overall pointers, though not really about constructing queries.
Does any of that help?
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
Using an in-memory database is still an integration test (because it actually tests whether your query generates correct SQL and executes it against a database).
If I am using an in-memory db, can I write unit tests?
No, it would be an integration test.
Is one test enough?
Probably not; you should check each condition of your query - for example, one test per where clause, one for paging, and one for sorting, if applicable.
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
GivenUserForPrinting_WhenGetUserForPrinting_ThenMapToDTO would be a better name.

Do we need to unit test the GUI when using proper abstraction?

With a good design pattern like MVP, MVC, etc., we aim to move all logic out of the GUI. That leaves us with a lightweight GUI which ideally just needs to "bind" its buttons and fields to properties in some business logic layer. This is a great approach, as this layer will be free of GUI stuff, and we can easily write unit tests for it.
My question is: Is this enough? Or should we still unit test the GUI layer?
IMHO if you remove all logic from the GUI, you don't need to test it automatically. Of course you still need to run it to see if it looks the way it should :)
This applies to unit tests. For integration tests it is still good to test everything, e.g. with Selenium, if possible.
Sometimes the GUI is not really that dumb. For instance there might be drag-and-drop support, or custom components which display their content based on where they are placed, and many more. In that case these things need to be specifically tested, both in integration tests and individually in unit tests.
Most of the time, integration tests start from the UI layer, so we end up testing a lot of the UI layer in those scenarios as well. I once read a comment about unit testing saying that you don't need to write tests for trivial code such as getters and setters (a getter can of course be broken - for example by not returning the value it is supposed to - but there is nothing subtle to get wrong), so we don't end up writing unit tests for getters and setters unless there is some logic embedded in them (in which case they are not really getters and setters any more).
So if the GUI is totally dumb and there are only bindings in it, then unit tests are not required.

How to write tests without so many mocks?

I am a heavy advocate of proper Test-Driven Design or Behavior-Driven Design and I love writing tests. However, I keep coding myself into a corner where I need 3-5 mocks in a particular test case for a single class. No matter which way I start, top-down or bottom-up, I end up with a design that requires at least three collaborators from the highest level of abstraction.
Can somebody give good advice on how to avoid this pitfall?
Here's a typical scenario. I design a Widget that produces a Midget from a given text value. It always starts really simple until I get into the details. My Widget must interact with several hard to test things like file systems, databases, and the network.
So, instead of designing all that into my Widget I make a Bridget collaborator. The Bridget takes care of one half of the complexity, the database and network, allowing me to focus on the other half which is multimedia presentation. So, then I make a Gidget that performs the multimedia piece. The entire thing needs to happen in the background, so now I include a Thridget to make that happen. When all is said and done I end up with a Widget that hands work to a Thridget which talks over a Bridget to give its result to a Gidget.
Because I'm working in CocoaTouch and trying to avoid mock objects, I use the self-shunt pattern, where abstractions over collaborators become protocols that my test adopts. With 3+ collaborators my tests balloon and become too complicated. Even using something like OCMock mock objects leaves me with an order of complexity that I'd rather avoid. I tried wrapping my brain around a daisy-chain of collaborators (A delegates to B, who delegates to C, and so on) but I can't envision it.
Edit
Taking an example from below let's assume we have an object that must read/write from sockets and present the movie data returned.
//Assume myRequest is a String param...
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
DataProcessor aProcessor = ...;

// This gets broken into a "Network" collaborator.
for (char stuff : myRequest.toCharArray()) aOut.write(stuff);
Object data = aIn.read(); // Simplified read

// This is our second collaborator
aProcessor.process(data);
Now the above obviously has to deal with network latency, so it has to be threaded. This introduces a Thread abstraction to get us out of the practice of threaded unit tests. We now have:
AsynchronousWorker myWorker = getWorker(); // here's our third collaborator
myWorker.doThisWork(new WorkRequest() {
    public void run() {
        // Assume myRequest is a String param...
        DataProcessor aProcessor = ...;
        // Use our "Network" collaborator.
        NetworkHandler networkHandler = getNetworkHandler();
        Object data = networkHandler.retrieveData(); // Simplified read
        // This is our multimedia collaborator
        aProcessor.process(data);
    }
});
Forgive me for working backwards without tests, but I'm about to take my daughter outside and I'm rushing through the example. The idea here is that I'm orchestrating the collaboration of several collaborators from behind a simple interface that will get tied to a UI button click event. So the outermost test reflects a Sprint task that says: given a "Play Movie" button, when it is clicked, the movie will play.
Edit
Let's discuss.
Having many mock objects shows that:
1) You have too many dependencies.
Re-examine your code and try to break it down further. Especially, try to separate data transformation from processing.
Since I don't have experience with the environment you are developing in, let me give my own experience as an example.
With a Java socket, you are given an InputStream and an OutputStream so that you can read data from and send data to your peer. So your program looks like this:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();

// Read data
Object data = aIn.read(); // Simplified read

// Process
if (data.equals('1')) {
    // Do something
    // Write data
    aOut.write('A');
} else {
    // Do something else
    // Write another data
    aOut.write('B');
}
If you want to test this method, you end up having to create mocks for aIn and aOut, which may require quite complicated supporting classes behind them.
But if you look carefully, reading from aIn and writing to aOut can be separated from the processing. So you can create another class which takes the input that was read and returns an output object.
public class ProcessSocket {
    public Object process(Object readObject) {
        if (readObject.equals(...)) {
            // Do something
            // Write data
            return 'A';
        } else {
            // Do something else
            // Write another data
            return 'B';
        }
    }
}
and your previous method becomes:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
ProcessSocket aProcessor = ...;

// Read data
Object data = aIn.read(); // Simplified read
aOut.write(aProcessor.process(data));
This way you can test the processing with little need for mocks. Your test can be:
ProcessSocket aProcessor = ...;
assert(aProcessor.process('1').equals('A'));
because the processing is now independent of the input, the output, and even the socket.
2) You are over-unit-testing, by unit testing what should be integration tested.
Some tests are not suited to unit testing (in the sense that they require unnecessarily more effort and may not efficiently give a good indicator). Examples of this kind of test are those involving concurrency and user interfaces. They require different ways of testing than unit testing.
My advice would be to break them down further (similar to the technique above) until some parts are suitable for unit testing, leaving only the small hard-to-test parts.
EDIT
If you believe you have already broken it into very fine pieces, then perhaps that is your problem.
Software components or sub-components are related to each other in the way that characters are combined into words, words into sentences, sentences into paragraphs, paragraphs into subsections, sections, chapters and so on.
My example says you should break subsections into paragraphs, whereas you may already have gone all the way down to words.
Look at it this way: most of the time, paragraphs are related to other paragraphs more loosely than sentences relate to (or depend on) other sentences. Subsections and sections are looser still, while words and characters are more dependent on each other (as the grammatical rules kick in).
So perhaps you are breaking things down so finely that the language syntax forces those dependencies, which in turn forces you to have so many mock objects.
If that is the case, your solution is to balance the tests. If a part is depended on by many others and requires a complex set of mock objects (or simply more effort) to test, maybe you don't need to test it in isolation. For example, if A uses B, C uses B, and B is very hard to test, why not just test A+B as one unit and C+B as another? In my example, if ProcessSocket were so hard to test - to the point that you would spend more time writing and maintaining the tests than developing the code - then it would not be worth it, and I would just test the whole thing at once.
Without seeing your code (and given that I have never developed for CocoaTouch) it is hard to tell, and I may not be able to provide a good comment here. Sorry :D.
EDIT 2
Seeing your example, it is pretty clear that you are dealing with an integration issue, assuming that you have already tested playing the movie and the UI separately. It is understandable why you need so many mock objects. If this is the first time you have used this kind of integration structure (this concurrency pattern), then those mock objects may actually be needed and there is not much you can do about it. That's all I can say :-p
Hope this helps.
My solution (not CocoaTouch) is to continue to mock the objects, but to refactor the mock setup into a common test method. This reduces the complexity of the test itself while retaining the mock infrastructure to test my class in isolation.
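A sketch of that refactoring in C# with Moq and NUnit (the collaborator names are borrowed from the question; the interfaces and setup are illustrative only, inside a test fixture class):

// Common setup method shared by all tests: the mock plumbing lives here,
// so each test states only its own scenario.
private Mock<INetworkHandler> network;
private Mock<IDataProcessor> processor;

private Widget CreateWidget()
{
    network = new Mock<INetworkHandler>();
    processor = new Mock<IDataProcessor>();
    network.Setup(n => n.RetrieveData()).Returns("movie-data");
    return new Widget(network.Object, processor.Object);
}

[Test]
public void ClickingPlayProcessesTheMovieData()
{
    Widget widget = CreateWidget();
    widget.Play();
    processor.Verify(p => p.Process("movie-data"));
}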
I do some fairly complete testing, but it's automated integration testing and not unit-testing, so I have no mocks (except the user: I mock the end-user, simulating user-input events and testing/asserting whatever's output to the user): Should one test internal implementation, or only test public behaviour?
What I'm looking for is best practices using TDD.
Wikipedia describes TDD as,
a software development technique that relies on the repetition of a very short development cycle: First the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.
It then goes on to prescribe:
Add a test
Run all tests and see if the new one fails
Write some code
Run the automated tests and see them succeed
Refactor code
I do the first of these, i.e. the "very short development cycle"; the difference in my case is that I test after the code is written.
The reason why I test after it's written is so that I don't need to "write" any tests at all, even the integration tests.
My cycle is something like:
Rerun all automated integration tests (start with a clean slate)
Implement a new feature (with refactoring of the existing code if necessary to support the new feature)
Rerun all automated integration tests (regression testing to ensure that new development hasn't broken existing functionality)
Test the new functionality:
a. End-user (me) does user input via the user interface, intended to exercise the new feature
b. End-user (me) inspects the corresponding program output, to verify whether the output is correct for the given input
When I do the testing in step 4, the test environment captures the user input and program output into a data file; the test environment can replay such a test in the future (recreate the user input, and assert whether the corresponding output is the same as the expected output captured previously). Thus, the test cases which were run/created in step 4 are added to the suite of all automated tests.
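A minimal sketch of that record/replay idea (all type names here are hypothetical, since the original test environment is custom):

// Hypothetical replay harness: recreates captured input and asserts
// the program output matches the output captured earlier as correct.
public static class ReplayRunner
{
    public static void Replay(string captureFile, IApp app)
    {
        Capture capture = Capture.Load(captureFile); // recorded during step 4
        foreach (InputEvent e in capture.Inputs)
            app.Dispatch(e); // recreate the user input
        Assert.AreEqual(capture.ExpectedOutput, app.CurrentOutput());
    }
}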
I think this gives me the benefits of TDD:
Testing is coupled with development: I test immediately after coding instead of before coding, but in any case the new code is tested before it's checked in; there's never untested code.
I have automated test suites, for regression testing
I avoid some costs/disadvantages:
Writing tests (instead I create new tests using the UI, which is quicker and easier, and closer to the original requirements)
Creating mocks (required for unit testing)
Editing tests when the internal implementation is refactored (because the tests depend only on the public API and not on the internal implementation details).
To get rid of excessive mocking you can follow the Test Pyramid, which suggests having a lot of unit + component tests and a smaller number of slow & fragile system tests. It boils down to a few simple rules:
Write tests at the lowest possible level that doesn't require mocking. If you can write a unit test (e.g. parsing a string), then write it (see the sketch after this list). But if you want to check whether the parsing is invoked by the upper layer, then that requires initializing more of the stuff.
Mock external systems. Your system needs to be a self-contained, independent piece. Relying on external apps (which would have their own bugs) would complicate testing a lot. Writing mocks/stubs is much easier.
After that, have a couple of tests checking your app with real integrations.
With this mindset you eliminate almost all mocking.
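For the first rule, a small illustration (the parser and its API are invented for the example): the parsing is tested directly, with no mocks at all.

[Test]
public void ParsesIsoDateRange()
{
    // Lowest-level test: plain input and output, nothing to mock.
    DateRange range = DateRangeParser.Parse("2020-01-01/2020-01-31");
    Assert.AreEqual(new DateTime(2020, 1, 1), range.Start);
    Assert.AreEqual(new DateTime(2020, 1, 31), range.End);
}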

Should I unit-test my grid rendering logic?

I have a simple project, mostly consisting of back-end service code. I have this fully unit-tested, including my DAL layer...
Now I have to write the front-end. I re-use what business objects I can in my front-end, and at one point I have a grid that renders some output. I have my DAL object with some function called DisplayRecords(id) which displays the records for a given ID...
All of these DAL objects are unit tested. But is it worth writing a unit test for the DisplayRecords() function? This function calls a stored proc which does some joins, meaning that my unit test would have to set up multiple tables, one with 15 columns, and its return value is a DataSet (this is the only function in my DAL that returns a DataSet - because it wasn't worth creating an object just for this one grid)...
Is stuff like this even worth testing? What about front-end logic in general - do people tend to skip unit tests for the ASP.NET front-end, similar to how people 'skip' the logic for private functions? I know the latter is a bit different - testing behavior vs. implementation and all... but I am just curious what the general rule of thumb is.
Thanks very much
There are a few things that weigh into whether you should write tests:
It's all about confidence. You build tests so that you have confidence to make changes. Can you confidently make changes without tests?
How important is this code to the consumers of the application? If this is critical and central to everything, test it.
How embarrassing is it if you have regressions? On my last project, my goal was no regressions - I didn't want the client to have to report the same bug twice. So every important bug got a test to reproduce it before it was fixed.
How hard is it to write the test? There are many tools that can help ease the pain:
Selenium is well understood and straightforward to set up, though it can be a little expensive to maintain a large test suite in Selenium. You'll need fixture data for this to work.
Use a mock to stub out your DAL call, assuming it's tested elsewhere. That way you can save the time of creating all the fixture data. This is a common pattern in testing Java/Spring controllers.
Break the code down in other ways simply so that it can be tested. For example, extract the code that formats a specific grid cell, and write unit tests around that, independent of the view code or real data.
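For that last point, a hedged sketch (names invented for the example): the cell formatting is pulled out of the view so it can be tested without a page, a grid, or real data.

// View-free formatting logic: trivially unit testable.
public static class GridFormatter
{
    public static string FormatAmountCell(decimal amount)
    {
        // Accounting style: negatives in parentheses (assumes a '.' decimal culture).
        return amount < 0 ? string.Format("({0:N2})", -amount) : amount.ToString("N2");
    }
}

[Test]
public void NegativeAmountsAreParenthesized()
{
    Assert.AreEqual("(12.50)", GridFormatter.FormatAmountCell(-12.5m));
}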
I tend to make quick Selenium tests and just sit and watch the app do its thing - that's a fast validation method which avoids all the manual clicking.
Fully automated UI testing is tedious and should IMO only be done in more mature apps where the UI won't change much. Regarding the 'in-between' code, I would test it if it is reused and/or complicated or introduces new logic, but if it's just more or less a new sequence of DAL method calls specific to a single view, I would skip it.