I'm a controls developer and a relative newbie to unit testing. Almost daily, I fight the attitude that you cannot test controls because of the UI interaction. I'm producing a demonstration control to show that it's possible to dramatically reduce manual testing if the control is designed to be testable. Currently I've got 50% logic coverage, but I think I could bump that up to 75% or higher if I could find a way to test some of the more complicated parts.
For example, I have a class with properties that describe the control's state and a method that generates a WPF PathGeometry object made of several segments. The implementation looks something like this:
internal PathGeometry CreateOuterGeometry()
{
    double arcRadius = OuterCoordinates.Radius;
    double sweepAngle = OuterCoordinates.SweepAngle;
    ArcSegment outerArc = new ArcSegment(...);
    LineSegment arcEndToCenter = new LineSegment(...);
    PathFigure fig = new PathFigure();
    // configure figure and add segments...
    PathGeometry outerGeometry = new PathGeometry();
    outerGeometry.Figures.Add(fig);
    return outerGeometry;
}
I've got a few other methods like this that account for a few hundred blocks of uncovered code, an extra 25% coverage. I originally planned to test these methods, but rejected the notion. I'm still a unit testing newbie, and the only way I could think of to test the code would be several methods like this:
void CreateOuterGeometry_AngleIsSmall_ArcSegmentIsCorrect()
{
    ClassUnderTest classUnderTest = new ClassUnderTest();
    // configure the class under test...
    ArcSegment expectedArc = ...; // generate the expected arc...
    PathGeometry geometry = classUnderTest.CreateOuterGeometry();
    ArcSegment arc = (ArcSegment)geometry.Figures[0].Segments[0];
    Assert.AreEqual(expectedArc, arc);
}
The test itself looks fine; I'd write one for each expected segment. But I had some problems:
Do I need tests to verify "Is the first segment an ArcSegment?" In theory the test tests this, but shouldn't each test only test one thing? This sounds like two things.
The control has at least six cases for calculation and four edge cases; this means for each method I need at least ten tests.
During development I changed how the various geometries were generated several times. This would cause me to have to rewrite all of the tests.
The first problem gave me pause because it seemed like it might inflate the number of tests. I thought I might have to test things like "Were there x segments?" and "Is segment n the right type?", but now that I've thought about it more I see that there's no branching logic in the method, so I only need to run those tests once. The second problem confirmed that the tests would take significant effort; that seems unavoidable. The third problem compounds the first two: every time I changed the way the geometry was calculated, I'd have to edit an estimated 40 tests to make them respect the new logic, including adding or removing tests if segments were added or removed.
Because of these three problems, I opted to write an application and manual test plan that puts the control in all of the interesting states and asks the user to verify it looks a particular way. Was this wrong? Am I overestimating the effort involved with writing the unit tests? Is there an alternative way to test this that might be easier? (I'm currently studying mocks and stubs; it seems like it'd require some refactoring of the design and end up being approximately as much effort.)
Use dependency injection and mocks.
Create interfaces for ArcSegmentFactory, LineSegmentFactory, etc., and pass a mock factory to your class. This way, you'll isolate the logic that is specific to this object (which should make testing easier), and you won't depend on the logic of your other objects.
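For example, here is a minimal sketch of what that might look like; the IArcSegmentFactory interface and DialControl class are illustrative names, not part of your existing code:

using System.Windows;
using System.Windows.Media;

// Hypothetical factory abstraction over segment construction.
public interface IArcSegmentFactory
{
    ArcSegment CreateArc(Point start, Point end, double radius);
}

public class DialControl
{
    private readonly IArcSegmentFactory _arcFactory;

    public DialControl(IArcSegmentFactory arcFactory)
    {
        _arcFactory = arcFactory;
    }

    internal PathGeometry CreateOuterGeometry()
    {
        // The geometry logic now only decides which segments to request;
        // the factory owns how they are built.
        ArcSegment outerArc = _arcFactory.CreateArc(new Point(0, 0), new Point(10, 0), 5.0);
        var figure = new PathFigure();
        figure.Segments.Add(outerArc);
        var geometry = new PathGeometry();
        geometry.Figures.Add(figure);
        return geometry;
    }
}

A test can then pass in a mock IArcSegmentFactory and assert that CreateArc was called with the expected start point, end point, and radius, without comparing WPF geometry objects directly.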
About what to test: you should test what's important. You probably have a timeline in which you want to have things done, and you probably won't be able to test every single thing. Prioritize stuff you need to test, and test in order of priority (considering how much time it will take to test). Also, when you've already made some tests, it gets much easier to create new tests for other stuff, and I don't really see a problem in creating multiple tests for the same class...
About the changes: that's what tests are for - allowing you to make changes without fearing that they will bring chaos to the world.
You might try writing a control generation tool that generates random control graphs, and test those. This might yield some data points that you might not have thought of.
In our project, we use JUnit to perform tests which are not, strictly speaking, unit tests. We find, for example, that it's helpful to hook up a blank database and compare an automatic schema generated by Hibernate (an Object-Relational Mapping tool) to the actual schema for our test database; this helps us catch a lot of issues with wrong database mappings. But in general... you should only be testing one method, on one class, in a given test method. That doesn't mean you can't do multiple assertions against it to examine various properties of the object.
My approach is to convert the graph into a string (one segment per line) and compare this string to an expected result.
If you change something in your code, tests will start to fail but all you need to do is to check that the failures are in the right places. Your IDE should offer a side-by-side diff for this.
When you're confident that the new output is correct, just copy it over the old expected result. This will make sure that a mistake won't go unnoticed (at least not for long), the tests will still be simple and they are quick to fix.
Next, if you have common path parts, then you can put them into individual strings and build the expected result of a test from those parts. This allows you to avoid repeating yourself (and if the common part changes, you just have to update a single place for all tests).
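Here is a sketch of that approach applied to the geometry example from the question, using NUnit as an assumption; DescribeGeometry is a hypothetical helper, and the expected values are placeholders rather than real coordinates:

using System;
using System.Text;
using System.Windows.Media;
using NUnit.Framework;

static string DescribeGeometry(PathGeometry geometry)
{
    var sb = new StringBuilder();
    foreach (PathFigure figure in geometry.Figures)
    {
        sb.AppendLine($"Figure start={figure.StartPoint}");
        foreach (PathSegment segment in figure.Segments)
        {
            switch (segment)
            {
                case ArcSegment arc:
                    sb.AppendLine($"Arc to={arc.Point} size={arc.Size} sweep={arc.SweepDirection}");
                    break;
                case LineSegment line:
                    sb.AppendLine($"Line to={line.Point}");
                    break;
                default:
                    sb.AppendLine(segment.GetType().Name);
                    break;
            }
        }
    }
    return sb.ToString();
}

[Test]
public void CreateOuterGeometry_AngleIsSmall_MatchesExpectedDescription()
{
    var classUnderTest = new ClassUnderTest();
    // configure the class under test...

    string actual = DescribeGeometry(classUnderTest.CreateOuterGeometry());

    // Placeholder expected output; copy the verified actual output here.
    string expected = string.Join(Environment.NewLine,
        "Figure start=0,0",
        "Arc to=10,0 size=5,5 sweep=Clockwise",
        "Line to=0,0") + Environment.NewLine;
    Assert.AreEqual(expected, actual);
}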
If I understand your example correctly, you were trying to find a way to test whether a whole bunch of draw operations produce a given result.
Instead of human eyes, you could have produced a set of verified "good" snapshot images, and created unit tests which use the draw operations to produce the same set of images and compare the results with an image comparison. This would allow you to automate the testing of the graphic operations, which is what I understand your problem to be.
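Here is a rough sketch of that idea in WPF; the control type, sizes, and reference-image path are placeholders, the test runner must be STA for WPF rendering, and in practice rendering differences between machines may force you to compare with a tolerance rather than exact equality:

using System.IO;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using NUnit.Framework;

static byte[] RenderToPixels(UIElement element, int width, int height)
{
    element.Measure(new Size(width, height));
    element.Arrange(new Rect(0, 0, width, height));

    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(element);

    var pixels = new byte[width * height * 4]; // 4 bytes per pixel for Pbgra32
    bitmap.CopyPixels(pixels, width * 4, 0);
    return pixels;
}

[Test]
public void Control_InDefaultState_MatchesReferenceImage()
{
    var control = new MyControl(); // hypothetical control, configured into the state under test
    byte[] actual = RenderToPixels(control, 200, 200);
    byte[] expected = File.ReadAllBytes("ReferenceImages/default-state.raw");
    CollectionAssert.AreEqual(expected, actual);
}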
The textbook way to do this would be to move all the business logic to libraries or controllers which are called by a 1 line method in the GUI. That way you can unit test the controller or library without dealing with the GUI.
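As a minimal sketch of that split (GaugeLogic and its angle formula are illustrative, not from the question):

using NUnit.Framework;

public class GaugeLogic
{
    // All the interesting math lives here, testable without any UI.
    public double ComputeSweepAngle(double value, double max)
        => max == 0 ? 0 : 360.0 * value / max;
}

[Test]
public void ComputeSweepAngle_HalfOfMax_Returns180()
{
    Assert.AreEqual(180.0, new GaugeLogic().ComputeSweepAngle(50, 100), 1e-9);
}

The GUI's event handler then does nothing but call ComputeSweepAngle and hand the result to the drawing code.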
Related
I'm currently getting to grips with Lucene indexing, and scratching my head about the "correct" approach if using TDD.
To do this you have to create an IndexWriter, produce an index based on a simple collection of texts (Strings), which are tokenized, stemmed, etc.
To look up a query in this index you have to create a DirectoryReader, make a Query, get the hits, etc.
So in the course of making my first test class (JUnit 4) I did each of these steps, one by one, making a new test method for each step in this process which ultimately produces a non-zero number of hits, if all goes well.
The problem I have is that the last test method does ALL these steps: clears the index directory, creates an IndexWriter, makes the index, etc. etc. and finally counts the hits. This last method can ultimately be tripped up by anything along the way going wrong. Furthermore, the previous methods all seem redundant...! And no "mocking" opportunities seem to present themselves...
Is this a candidate for a "test suite"? How does a TDD Pro develop "appropriate" testing for a situation like this?
It sounds to me that the last test method you refer to is more of an integration test than a unit test. Unit tests should only test a very specific unit of code (usually a method or part of a method) and should mock out any interactions with other classes; they shouldn't touch a database, filesystem, or anything outside of the specific unit of code under test. It sounds like the first tests you created are unit tests and the last test is an integration test. It's OK to have some redundancy between the two types of tests, since the unit tests exercise very specific pieces of logic while the integration tests check the interaction between those pieces.
Scenario: I need to write a complex nHibernate query, that would return projected DTO, but I want to use TDD approach. The method would look like this:
public PrintDTO GetUsersForPrinting(int userId)
{
Session.QueryOver<User>().//some joins, conditions etc.
//returns projected dto
}
Questions:
Since the most common approach for this kind of operation is to use an in-memory database, should I write an integration test?
If I am using an in-memory DB, can I write unit tests?
Is one test enough?
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
I ask because:
There is a lot of abstract information about TDD and integration testing, but when it comes to concrete implementation it is very difficult to apply that information.
TDD suggests that integration tests should be built from unit tests.
This is not really a very good problem to learn TDD with. I assume you don't already know what the complex query looks like, and you want to use test-driven techniques to drive it out. Awesome :)
But let's see if I can answer your questions.
1. Yes.
2. Any test that includes a real DB, whether it is in-memory or on-disk, is not a unit test. A unit test would use a mock DB.
3. Maybe - if your query is complex enough, then no.
4. testGetUsersForPrinting or getUsersForPrintingTest or similar.
Most probably I would drive out the query in a SQL interpreter, not in code. The aim would be to produce a series of integration tests against an in-memory db based on what I learn during this process.
Start from the minimum possible DTO you can think of, and build up from there.
Finally convert the query into nhibernate calls, then make the integration tests pass.
Test-driven, but not really unit-test-driven.
If you are willing to accept maximum TDD discipline and deal with working slower and being more annoyed than usual, you can automate each integration test as you develop it and write code to make it pass. This will mean you are switching frequently among 3 levels of abstraction / editors / environments (direct SQL queries, integration tests, c# code) - I deal with this by setting up techniques to force myself to follow the right steps each time.
This last bit is why this is not a good problem to learn TDD with. You will need a lot of discipline you probably haven't forced yourself to acquire yet!
Good luck.
OK, some concrete examples. I would modify your code sample to look like this:
public PrintDTO GetUsersForPrinting(int userId, ISession session)
{
    var data = session.QueryOver<User>(); // some joins, conditions etc.
    return data; // or whatever
}
In your unit test you would write:

public void TestDTO()
{
    // Arrange
    StubSession session = ...; // set up a stub session which returns hardcoded values
    // Act
    PrintDTO users = GetUsersForPrinting(111, session);
    // Assert
    Assert.That(users.Count, Is.EqualTo(1));
    Assert.That(users[0].UserId, Is.EqualTo(111));
}
In your integration test, you would use a real db, and your session object would actually connect to it, and the queries would be resolved against that db
Arrange-Act-Assert is a standard method for organizing unit tests.
Generally you want as few Asserts as possible in a unit test. And you will have multiple unit tests.
When you are writing a unit test, start by writing the Assert, then fill in the rest to make it compile/get the result you want. Make the test fail first, because then you know you have really delivered something when it passes.
In this example, to implement a stub ISession you would write a local StubSession class (visible only to the test suite) that implements ISession, filling in the absolute minimum to get it to compile and returning the minimum data to get the test to pass.
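A sketch of the idea; ISimpleSession is a simplified stand-in here, since the real NHibernate ISession has dozens of members, which a stub would fill in with throw new NotImplementedException():

using System.Collections.Generic;

// Simplified stand-in for the session dependency (illustrative only).
public interface ISimpleSession
{
    IList<User> QueryUsers(int userId);
}

// Visible only to the test suite; returns hardcoded values.
class StubSession : ISimpleSession
{
    public IList<User> QueryUsers(int userId)
    {
        // The absolute minimum data needed to make the test above pass.
        return new List<User> { new User { UserId = userId } };
    }
}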
To build up to your whole DTO - assuming you know what you want in your DTO - proceed, as you say in the comments, incrementally. Build up each part of your DTO a piece at a time; add a unit test for each piece.
Keeping track of this is another piece of TDD discipline.
Set yourself up with a TODO list - just a simple text file, or possibly a lengthy comment at the start of your test suite. List all the things you want to test, e.g. zero results, one result, two results, 20 results; user ID, and whatever other pieces of information you need to have.
If you are doing a complex query across tables, add a TODO item for each join, each part of the where clause, etc.
Add items for ordering and paging, etc., if you are using those.
Pick the simplest things first. Only do one small thing (in a single red-green-refactor cycle) at a time. As you work through your list, you might want to break items up into smaller pieces, or you might think of additional things you need to do. Add them to the TODO list rather than working directly on them.
In this particular case I would swap - after each red-green-refactor cycle - into the SQL environment and/or the sqlite integration test to work out how to make the next piece work. I guess this is a sort of step between red and green - choose what you will test next, write the test (which fails obviously), fiddle around in SQL until you know how to make it pass, write the nHibernate calls to make your test green, then refactor.
Be aware that some of the things you list might turn out not to be necessary, or take too long, etc. It's still good to write them down, so you know what you are not doing as well as what you are doing. Keep focused on your goal.
I tend to also develop a list of "smells" and/or refactorings that I can see I will want to do but am not quite ready for this cycle. Remember to minimise duplication/refactor your tests as well as your SUT (System Under Test).
It's a doing rather than a seeing thing. The list of unit tests you end up with, and the code they exercise, is not a very good description of the journey. Kent Beck's original TDD book is slim and will give you some good overall pointers, but not really about constructing queries.
Does any of that help?
Since the most common approach is to use an in-memory database for this kind of operation, should I write an integration test?
Using an in-memory database is still an integration test (because it actually tests whether your query generates correct SQL and executes it against a database).
If I am using an in-memory DB, can I write unit tests?
No, it would be an integration test.
Is one test enough?
Probably not; you should check each condition of your query, for example one test per where clause, one for paging, and one for sorting if applicable.
Since my integration test will probably check the projection, how should I name it? "GetUserForPrinting_return_correct_DTO" seems too abstract and silly.
GivenUserForPrinting_WhenGetUserForPrinting_ThenMapToDTO would be a better name.
I am a heavy advocate of proper Test Driven Design or Behavior Driven Design and I love writing tests. However, I keep coding myself into a corner where I need to use 3-5 mocks in a particular test case for a single class. No matter which way I start, top down or bottom up I end up with a design that requires at least three collaborators from the highest level of abstraction.
Can somebody give good advice on how to avoid this pitfall?
Here's a typical scenario. I design a Widget that produces a Midget from a given text value. It always starts really simple until I get into the details. My Widget must interact with several hard-to-test things like file systems, databases, and the network.
So, instead of designing all that into my Widget I make a Bridget collaborator. The Bridget takes care of one half of the complexity, the database and network, allowing me to focus on the other half which is multimedia presentation. So, then I make a Gidget that performs the multimedia piece. The entire thing needs to happen in the background, so now I include a Thridget to make that happen. When all is said and done I end up with a Widget that hands work to a Thridget which talks over a Bridget to give its result to a Gidget.
Because I'm working in CocoaTouch and trying to avoid mock objects, I use the self-shunt pattern, where abstractions over collaborators become protocols that my test adopts. With 3+ collaborators my tests balloon and become too complicated. Even using something like OCMock mock objects leaves me with an order of complexity that I'd rather avoid. I tried wrapping my brain around a daisy-chain of collaborators (A delegates to B, who delegates to C, and so on) but I can't envision it.
Edit
Taking an example from below, let's assume we have an object that must read from and write to sockets and present the movie data returned.
// Assume myRequest is a String param...
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
DataProcessor aProcessor = ...;

// This gets broken into a "Network" collaborator.
for (char c : myRequest.toCharArray()) aOut.write(c);
Object data = aIn.read(); // Simplified read

// This is our second collaborator
aProcessor.process(data);
Now the above obviously deals with network latency so it has to be Threaded. This introduces a Thread abstraction to get us out of the practice of threaded unit tests. We now have
AsynchronousWorker myWorker = getWorker(); // here's our third collaborator
myWorker.doThisWork(new WorkRequest() {
    public void run() { // assuming WorkRequest declares a run() method
        // Assume myRequest is a String param...
        DataProcessor aProcessor = ...;
        // Use our "Network" collaborator.
        NetworkHandler networkHandler = getNetworkHandler();
        Object data = networkHandler.retrieveData(); // Simplified read
        // This is our multimedia collaborator
        aProcessor.process(data);
    }
});
Forgive me for working backwards without tests, but I'm about to take my daughter outside and I'm rushing through the example. The idea here is that I'm orchestrating the collaboration of several collaborators from behind a simple interface that will get tied to a UI button click event. So the outermost test reflects a Sprint task that says: given a "Play Movie" button, when it is clicked, the movie will play.
Edit
Let's discuss.
Having many mock objects shows that:
1) You have too many dependencies.
Re-look at your code and try to break it down further. In particular, try to separate data transformation from processing.
I don't have experience in the environment you are developing in, so let me give an example from my own.
With a Java socket, you are given an InputStream and an OutputStream so that you can read data from and send data to your peer. Your program looks like this:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();

// Read data
Object data = aIn.read(); // Simplified read

// Process
if (data.equals('1')) {
    // Do something
    // Write data
    aOut.write('A');
} else {
    // Do something else
    // Write another data
    aOut.write('B');
}
If you want to test this method, you end up creating mocks for aIn and aOut, which may require quite complicated supporting classes behind them.
But if you look carefully, reading from aIn and writing to aOut can be separated from the processing. So you can create another class that takes the read input and returns the output object.
public class ProcessSocket {
    public Object process(Object readObject) {
        if (readObject.equals(...)) {
            // Do something
            // Write data
            return 'A';
        } else {
            // Do something else
            // Write another data
            return 'B';
        }
    }
}
and your previous method becomes:
InputStream aIn = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();
ProcessSocket aProcessor = ...;

// Read data
Object data = aIn.read(); // Simplified read
aProcessor.process(data);
This way you can test the processing with little need for mocks. Your test becomes:

ProcessSocket aProcessor = ...;
assertEquals('A', aProcessor.process('1'));

Because the processing is now independent of the input, the output, and even the socket.
2) You are over-unit-testing, by unit testing what should be integration tested.
Some tests are not suited to unit testing (in the sense that they require unnecessary extra effort and may not efficiently yield a good indicator). Examples of this kind of test are those involving concurrency and user interfaces. They require different ways of testing than unit testing.
My advice would be to break them down further (similar to the technique above) until some parts are suitable for unit testing, so that only the little hard-to-test parts remain.
EDIT
If you believe you have already broken it into very fine pieces, perhaps that is your problem.
Software components or sub-components are related to each other the way characters combine into words, words into sentences, sentences into paragraphs, paragraphs into subsections, sections, chapters, and so on.
In terms of my example: you should have broken subsections into paragraphs, but you have already broken things down to words.
Look at it this way: most of the time, paragraphs depend on other paragraphs more loosely than sentences depend on other sentences. Subsections and sections are looser still, while words and characters are more tightly dependent (as the grammatical rules kick in).
So perhaps you are breaking things down so finely that the language syntax forces those dependencies on you, and in turn forces you to have so many mock objects.
If that is the case, your solution is to balance the tests. If a part is depended on by many others and requires a complex set of mock objects (or simply more effort to test), maybe you don't need to test it in isolation. For example, if A uses B, C uses B, and B is very hard to test, why not test A+B as one unit and C+B as another? In my example, if ProcessSocket were so hard to test that you would spend more time writing and maintaining the tests than developing the code, it would not be worth it, and I would just test the whole thing at once.
Without seeing your code (and given that I have never developed with CocoaTouch) it is hard to tell, and I may not be able to provide a good comment here. Sorry :D
EDIT 2
Seeing your example, it is pretty clear that you are dealing with an integration issue. Assuming that you already test movie playback and the UI separately, it is understandable why you need so many mock objects. If this is the first time you have used this kind of integration structure (this concurrency pattern), those mock objects may actually be needed, and there is not much you can do about it. That's all I can say :-p
Hope this helps.
My solution (not CocoaTouch) is to continue to mock the objects, but to refactor the mock setup into a common test method. This reduces the complexity of the test itself while retaining the mock infrastructure to test my class in isolation.
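For example, a sketch using NUnit and Moq (the Widget class and its collaborator interfaces are stand-ins for whatever your class under test depends on):

using Moq;
using NUnit.Framework;

[TestFixture]
public class WidgetTests
{
    private Mock<INetworkHandler> _network;
    private Mock<IDataProcessor> _processor;
    private Widget _widget;

    [SetUp]
    public void CreateCollaborators()
    {
        // All the mock wiring lives here, so each test body stays small.
        _network = new Mock<INetworkHandler>();
        _processor = new Mock<IDataProcessor>();
        _widget = new Widget(_network.Object, _processor.Object);
    }

    [Test]
    public void PlayMovie_RetrievesDataAndProcessesIt()
    {
        var data = new object();
        _network.Setup(n => n.RetrieveData()).Returns(data);

        _widget.PlayMovie();

        _processor.Verify(p => p.Process(data));
    }
}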
I do some fairly complete testing, but it's automated integration testing rather than unit testing, so I have no mocks (except the user: I mock the end-user, simulating user-input events and asserting on whatever is output to the user). See: Should one test internal implementation, or only test public behaviour?
What I'm looking for is best practices using TDD.
Wikipedia describes TDD as,
a software development technique that relies on the repetition of a very short development cycle: First the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.
It then goes on to prescribe:
Add a test
Run all tests and see if the new one fails
Write some code
Run the automated tests and see them succeed
Refactor code
I do the first of these, i.e. "very short development cycle", the difference in my case being that I test after it's written.
The reason why I test after it's written is so that I don't need to "write" any tests at all, even the integration tests.
My cycle is something like:
Rerun all automated integration tests (start with a clean slate)
Implement a new feature (with refactoring of the existing code if necessary to support the new feature)
Rerun all automated integration tests (regression testing to ensure that new development hasn't broken existing functionality)
Test the new functionality:
a. End-user (me) does user input via the user interface, intended to exercise the new feature
b. End-user (me) inspects the corresponding program output, to verify whether the output is correct for the given input
When I do the testing in step 4, the test environment captures the user input and program output into a data file; the test environment can replay such a test in the future (recreate the user input, and assert whether the corresponding output is the same as the expected output captured previously). Thus, the test cases which were run/created in step 4 are added to the suite of all automated tests.
I think this gives me the benefits of TDD:
Testing is coupled with development: I test immediately after coding instead of before coding, but in any case the new code is tested before it's checked in; there's never untested code.
I have automated test suites, for regression testing
I avoid some costs/disadvantages:
Writing tests (instead I create new tests using the UI, which is quicker and easier, and closer to the original requirements)
Creating mocks (required for unit testing)
Editing tests when the internal implementation is refactored (because the tests depend only on the public API and not on the internal implementation details).
To get rid of excessive mocking you can follow the Test Pyramid which suggests having a lot of unit + component tests and a smaller number of slow & fragile system tests. It boils down to several simple rules:
Write tests at the lowest possible level that wouldn't require mocking. If you can write a unit test (e.g. parsing a string), then write it. But if you want to check whether the parsing is invoked by the upper layer, that would require setting up more of the stack.
Mock external systems. Your system needs to be a self-contained, independent piece. Relying on external apps (which would have their own bugs) would complicate testing a lot. Writing mocks/stubs is much easier.
After that, have a couple of tests checking your app with real integrations.
With this mindset you eliminate almost all mocking.
I have an app which draws a diagram. The diagram follows a certain schema,
e.g. shape X goes within shape Y, shapes {X, Y} belong to a group P, ...
The diagram can get large and complicated (think of a circuit diagram).
What is a good approach for writing unit tests for this app?
Find out where the complexity in your code is.
Separate it out from the untestable visual presentation.
Test it.
If you don't have any non-visual complexity, you are not writing a program, you are producing a work of art.
Unless you are using a horribly buggy compiler or something, I'd avoid any tests that boil down to 'test source code does what it says it does'. Any test that's functionally equivalent to:
assertEquals(hash(stripComments(loadSourceCode())), 0x87364f32);
can be deleted without loss.
It's hard to write defined unit tests for something visual like this unless you really understand the exact sequence of API calls that are going to be built.
To test something "visual" like this, you have three parts.
A "spike" to get the proper look, scaling, colors and all that. In some cases, this is almost the entire application.
A "manual" test of that creates some final images to be sure they look correct to someone's eye. There's no easy way to test this except by actually looking at the actual output. This is hard to automate.
Mock the graphics components to be sure your application calls the graphics components properly.
When you make changes, you have to run both tests: Are the API calls all correct? and Does that sequence of API calls produce the image that looks right?
You can -- if you want to really burst a brain cell -- try to create a PNG file from your graphics and test to see if the PNG file "looks" right. It's hardly worth the effort.
As you go forward, your requirements may change. In this case, you may have to rewrite the spike first and get things to look right. Then, you can pull out the sequence of API calls to create automated unit tests from the spike.
One can argue that creating the spike violates TDD. However, the spike is designed to create a testable graphics module. You can't easily write the test cases first because the test procedure is "show it to a person". It can't be automated.
You might consider first converting the initial input data into some intermediate format that you can test. Then you forward that intermediate format to the actual drawing function, which you have to test manually.
For example when you have a program that inputs percentages and outputs a pie chart, then you might have an intermediate format that exactly describes the dimensions and position of each sector.
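A sketch of that intermediate format for the pie chart example (all names illustrative):

using System.Collections.Generic;
using NUnit.Framework;

public record Sector(string Label, double StartAngle, double SweepAngle);

public static class PieLayout
{
    // Pure function: percentages in, sector geometry out. Fully unit-testable.
    public static List<Sector> Compute(IEnumerable<(string Label, double Percent)> percentages)
    {
        var sectors = new List<Sector>();
        double angle = 0;
        foreach (var (label, percent) in percentages)
        {
            double sweep = 360.0 * percent / 100.0;
            sectors.Add(new Sector(label, angle, sweep));
            angle += sweep;
        }
        return sectors;
    }
}

[Test]
public void Compute_TwoEqualShares_SplitsCircleInHalf()
{
    var sectors = PieLayout.Compute(new[] { ("a", 50.0), ("b", 50.0) });
    Assert.AreEqual(180.0, sectors[0].SweepAngle, 1e-9);
    Assert.AreEqual(180.0, sectors[1].StartAngle, 1e-9);
}

Only the small function that turns a list of Sectors into actual drawing calls still needs manual verification.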
You've described a data model. The application presumably does something, rather than just sitting there with some data in memory. Write tests which exercise the behaviour of the application and verify the outcome is what is expected.
I have a simple project, mostly consisting of back-end service code. I have this fully unit-tested, including my DAL layer...
Now I have to write the front-end. I re-use what business objects I can in my front-end, and at one point I have a grid that renders some output. I have my DAL object with some function called DisplayRecords(id) which displays the records for a given ID...
All of these DAL objects are unit tested. But is it worth it to write a unit test for the DisplayRecords() function? This function is calling a stored proc, which is doing some joins. This means that my unit test would have to set up multiple tables, one with 15 columns, and its return value is a DataSet (this is the only function in my DAL that returns a DataSet - because it wasn't worth it to create an object just for this one grid)...
Is stuff like this even worth testing? What about front-end logic in general - do people tend to skip unit tests for the ASP.NET front-end, similar to how people 'skip' the logic for private functions? I know the latter is a bit different - testing behavior vs implementation and all... but, am just curious what the general rule-of-thumb is?
Thanks very much
There are a few things that weigh into whether you should write tests:
It's all about confidence. You build tests so that you have confidence to make changes. Can you confidently make changes without tests?
How important is this code to the consumers of the application? If this is critical and central to everything, test it.
How embarrassing is it if you have regressions? On my last project, my goal was no regressions-- I didn't want the client to have to report the same bug twice. So every important bug got a test to reproduce it before it was fixed.
How hard is it to write the test? There are many tools that can help ease the pain:
Selenium is well understood and straightforward to set up. It can be a little expensive to maintain a large test suite in Selenium. You'll need the fixture data for this to work.
Use a mock to stub out your DAL call, assuming it's tested elsewhere. That way you can save time creating all the fixture data. This is a common pattern in testing Java/Spring controllers.
Break the code down in other ways simply so that it can be tested. For example, extract out the code that formats a specific grid cell, and write unit tests around that, independent of the view code or real data.
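For instance (CellFormatter is a hypothetical helper extracted from the view code; the expected string assumes an en-US test culture):

using System;
using NUnit.Framework;

public static class CellFormatter
{
    // Negative amounts render in accounting style: (12.50) instead of -12.50.
    public static string FormatAmount(decimal amount)
        => amount < 0 ? $"({Math.Abs(amount):N2})" : amount.ToString("N2");
}

[Test]
public void FormatAmount_Negative_UsesParentheses()
{
    Assert.AreEqual("(12.50)", CellFormatter.FormatAmount(-12.5m));
}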
I tend to make quick Selenium tests and just sit and watch the app do its thing - that's a fast validation method which avoids all the manual clicking.
Fully automated UI testing is tedious and should IMO only be done in more mature apps where the UI won't change much. Regarding the 'in-between' code, I would test it if it is reused and/or complicated or introduces new logic; but if it's just more or less a new sequence of DAL method calls specific to a single view, I would skip it.