Is there a cross-language TDD solution? - unit-testing

I want to write a simple colour management framework in C#, Java and AS3. I only want to write the unit tests once though, rather than recreating the unit tests in JUnit, FlexUnit and say NUnit.
I have in mind the idea of, say, an XML file that defines manipulations of "instance" and assertions based on the state of "instance", via setup, teardown and a set of tests, and then a utility that can convert that XML into xUnit code for an arbitrary number of xUnits. Before I start wasting time developing such a solution, though, I want to make sure no similar solution already exists.
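For what it's worth, a minimal sketch in Java of how the consuming side of such a spec might look (the C# and AS3 runners would mirror it): rather than generating xUnit source, a small JUnit test could read the shared XML file and drive "instance" reflectively. The file name, element names, attribute names and the single-String-argument simplification are all invented for illustration.

import static org.junit.Assert.assertEquals;

import java.io.File;
import java.lang.reflect.Method;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class SharedSpecTest {

    @Test
    public void runSharedColourSpec() throws Exception {
        // hypothetical shared spec, e.g.:
        // <test class="ColourConverter" method="toHex" arg="255,0,0" expect="#FF0000"/>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("colour-tests.xml"));

        NodeList tests = doc.getElementsByTagName("test");
        for (int i = 0; i < tests.getLength(); i++) {
            Element test = (Element) tests.item(i);

            // create the "instance" named in the spec and invoke the named method;
            // for simplicity this sketch assumes a single String argument
            Object instance = Class.forName(test.getAttribute("class"))
                    .getDeclaredConstructor()
                    .newInstance();
            Method method = instance.getClass()
                    .getMethod(test.getAttribute("method"), String.class);
            Object actual = method.invoke(instance, test.getAttribute("arg"));

            assertEquals(test.getAttribute("expect"), actual);
        }
    }
}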

Would FIT/Fitnesse be suitable for what you want?
FIT is an acceptance test framework rather than unit test framework, but from what you describe you would want to ensure that the three implementations have the same behavior rather than identical designs.
FIT has links to several languages
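For example, the Java side might express the shared behaviour as a FIT column fixture along the lines of the sketch below, with the C# and AS3 fixtures sitting over the same table; ColourConverter and toHex are invented names for the class under test.

import fit.ColumnFixture;

public class ColourConversionFixture extends ColumnFixture {

    // input columns, filled in by FIT from the table
    public int red;
    public int green;
    public int blue;

    // calculated column: FIT compares the return value against the table cell
    public String hex() {
        return new ColourConverter().toHex(red, green, blue); // hypothetical class under test
    }
}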

I think you are overcomplicating things... you might consider a scripting language that you can use against all 3. I know Ruby could be used to test Java via JRuby, and C# via IronRuby, but I don't know about AS3.
I have never needed to do this myself, but I imagine a dynamic language like Ruby could really let you do it without a lot of extra work.

As a side note, you could also try writing a compiler of sorts, much like Fog Creek's (in)famous Wasabi language; then you could write both your code and your tests in that language and have the compiler do the work. This would probably be overcomplicated too, but I think it would be a lot better than attempting to define an XML test language, and potentially a lot more readable.

You could also check out Fitnesse with Slim, as Slim should be a lot more lightweight to implement for new languages (AS3). I guess it's more about acceptance/integration testing than unit testing, but it could be worth looking into.

Should stubs be written before or after unit tests?

I was discussing this with a friend, and while intuitively it would seem to make no difference, I was wondering if anyone here might come up with a good reason to write the stub before or after the unit tests.
I think it doesn't really matter, as long as in the end you have both so the code compiles (or, in the case of interpreted languages, doesn't raise a method not found error).
However, since this is test-driven development, it makes sense to focus on the tests and write them first. Sometimes, even if I have a clear idea of how I want to structure my code, only while writing the tests do I realise that the API is useless and end up changing it. Writing the stubs after having written the tests makes this easier.
If you are doing TDD, you will write your tests first and concurrently implement any required test infrastructure such as stubs so that you will be able to do state-based verification on the stubs after you exercise your test subject (also known as System Under Test or SUT).
It's hard for me to envision writing stubs before the tests that will need them, but maybe that could work better for you. Anyway, it can be difficult to grasp TDD as a style of working, so my advice is to keep at it and figure out what works best for you.
Personally, I hardly ever think of stubs anyway, as testing with mocks is the way I like to work (behavior-based verification rather than state-based). You don't mention what tools you are using; perhaps a good toolkit for mock objects isn't readily available for your tools (although I doubt you'd have trouble finding something for most languages now). Still, I can think of quite a few mock object libraries for, say, Java work, and really I only have patience for one or two of them, so again it makes sense to experiment and discover what you like best.

Platform neutral unit testing?

I'm curious if there is a platform neutral way of defining unit tests. Consider the task of defining some unit tests for a new set implementation with code bases in both Java and C#. For example, we might want to test that {3, 4, 5} intersect {4} is {4}. Instead of writing this unit test twice (once in our Java project and once in our C# project) it would be nice to define the test once (maybe in XML?) and then have each runtime read and execute that test automatically. Some work would need to be done in each language to define how a set should be instantiated and populated, but it seems that we would only have to configure that detail once which seems reasonable when the payoff would be not duplicating n unit tests.
Does anyone know of a language or framework with this goal?
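To make the duplication concrete, the Java half of that example would look something like the JUnit 4 test below, and the NUnit version would mirror it almost line for line; MySet, its constructor and intersect are placeholders for the real set implementation (and it would need a sensible equals for the assertion to work).

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import org.junit.Test;

public class SetIntersectionTest {

    @Test
    public void intersectionKeepsOnlyCommonElements() {
        // MySet is the hypothetical set implementation under test
        MySet a = new MySet(Arrays.asList(3, 4, 5));
        MySet b = new MySet(Arrays.asList(4));

        assertEquals(new MySet(Arrays.asList(4)), a.intersect(b));
    }
}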
There are several issues with a potential framework for such platform-neutral unit testing:
mapping between pieces of C# and Java code (such a tool would need to know which Java/C# method or class to test given, say, a certain XML configuration file)
differences between external libraries (even if you can structure your own code so that translation is easily achievable, external library providers might not - and most likely won't even care)
differences between the languages themselves
Sure, perhaps one could overcome those issues with a fairly advanced and well-thought-through configuration file. But then another question is raised - is what we are doing still unit testing? A unit test is strictly bound to the code it is testing (being written in the same language springs to mind as one of the first characteristics), and there's really no smart way around that rule. Writing a "unit test" in XML..? That doesn't sound right.
There's one easy thing to remember - unit testing should be a plain and simple process. The approach you're asking for would most likely overcomplicate it. Writing tests in the same language is straightforward - after all, you are supposed to be able to write the implementation as well. Introducing an extra framework introduces the need to learn it, maintain it and deal with any new problems it might cause.
Overall, the problem with tools targeting different platforms is always the same - there will be cases when such a tool is too general or the platform's limitations are too specific. That's when you'll have to fall back to the platform's own language/tools. So even though it might sound appealing for very simple cases (like your intersect one), overall I don't think it's worth the effort. You'll end up doing things in a mixed way anyway.
I have developed something along the lines you describe that enables you to define your tests at a high level and then generates the test harness source code in either C or C++.
http://www.apollo-systems.co.uk/dev/products/
Whether I extend it to generate C# and Java depends on demand. It is aimed primarily at the embedded systems market, where C and C++ are the preferred languages.
The test specification is stored in XML, and I have defined a minimum set of operations to allow you to do something useful. The motivation for developing this is to let you focus on what you want to test rather than on writing test code.

Writing "unit testable" code?

What kind of practices do you use to make your code more unit testing friendly?
TDD -- write the tests first; forces you to think about testability and helps write the code that is actually needed, not what you think you may need
Refactoring to interfaces -- makes mocking easier
Public methods virtual if not using interfaces -- makes mocking easier
Dependency injection -- makes mocking easier (see the sketch after this list)
Smaller, more targeted methods -- tests are more focused, easier to write
Avoidance of static classes
Avoid singletons, except where necessary
Avoid sealed classes
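To illustrate the dependency injection and interface points from the list above, here is a minimal sketch; all of the names are invented, and a mocking framework could stand in for the hand-rolled fake just as easily.

// PriceFeed is the injected dependency; all names here are illustrative
interface PriceFeed {
    double priceOf(String symbol);
}

class PortfolioValuer {
    private final PriceFeed feed;

    // the collaborator is passed in rather than constructed internally,
    // so a test can supply a fake or a mock
    PortfolioValuer(PriceFeed feed) {
        this.feed = feed;
    }

    double value(String symbol, int quantity) {
        return feed.priceOf(symbol) * quantity;
    }
}

// a hand-rolled fake that a test might inject
class FixedPriceFeed implements PriceFeed {
    public double priceOf(String symbol) {
        return 10.0;
    }
}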
Dependency injection seems to help.
Write the tests first - that way, the tests drive your design.
Use TDD
When writing your code, utilise dependency injection wherever possible
Program to interfaces, not concrete classes, so you can substitute mock implementations.
Make sure all of your classes follow the Single Responsibility Principle. Single responsibility means that each class should have one and only one responsibility. That makes unit testing much easier.
I'm sure I'll be downvoted for this, but I'm going to voice the opinion anyway :)
While many of the suggestions here have been good, I think it needs to be tempered a bit. The goal is to write more robust software that is changeable and maintainable.
The goal is not to have code that is unit testable. There's a lot of effort put into making code more "testable" despite the fact that testable code is not the goal. It sounds really nice and I'm sure it gives people the warm fuzzies, but the truth is all of those techniques, frameworks, tests, etc, come at a cost.
They cost time in training, maintenance, productivity overhead, etc. Sometimes it's worth it, sometimes it isn't, but you should never put the blinders on and charge ahead with making your code more "testable".
When writing tests (as with any other software task), Don't Repeat Yourself (the DRY principle). If you have test data that is useful for more than one test, then put it someplace where both tests can use it. Don't copy the code into both tests. I know this seems obvious, but I see it happen all the time.
I use Test-Driven Development whenever possible, so I don't have any code that cannot be unit tested. It wouldn't exist unless the unit test existed first.
The easiest way is don't check in your code unless you check in tests with it.
I'm not a huge fan of writing the tests first, but one thing I believe in very strongly is that code must be checked in with tests - together, not even an hour or so apart. I think the order in which they are written is less important, as long as they come in together.
Small, highly cohesive methods. I learned this the hard way. Imagine you have a public method that handles authentication. Maybe you did TDD, but if the method is big, it will be hard to debug. Instead, if that #authenticate method does its work in a more pseudo-code kind of way, calling other small methods (maybe protected), then when a bug shows up it's easy to write new tests for those small methods and find the faulty one.
And something that you learn as the first thing in OOP, but that so many seem to forget: Code Against Interfaces, Not Implementations.
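A rough sketch of that #authenticate idea in Java: the public method reads like pseudo-code and delegates to small, individually testable helpers. The names and the placeholder checks are only illustrative.

public class Authenticator {

    public boolean authenticate(String username, String password) {
        if (!isKnownUser(username)) {
            return false;
        }
        if (isAccountLocked(username)) {
            return false;
        }
        return passwordMatches(username, password);
    }

    // each small method can get its own focused tests when a bug shows up
    protected boolean isKnownUser(String username) {
        return username != null && !username.isEmpty(); // placeholder check
    }

    protected boolean isAccountLocked(String username) {
        return false; // placeholder; real logic would consult lockout state
    }

    protected boolean passwordMatches(String username, String password) {
        return password != null && !password.isEmpty(); // placeholder check
    }
}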
Spend some time refactoring untestable code to make it testable. Write the tests and get 95% coverage. Doing that taught me all I need to know about writing testable code. I'm not opposed to TDD, but learning the specifics of what makes code testable or untestable helps you to think about testability at design time.
Don't write untestable code
1. Using a framework/pattern like MVC to separate your UI from your business logic will help a lot.
2. Use dependency injection so you can create mock test objects.
3. Use interfaces.
Check out the talk Automated Testing Patterns and Smells.
One of the main takeaways for me was to make sure that the unit test code is of high quality. If the code is well documented and well written, everyone will be motivated to keep this up.
No Statics - you can't mock out statics.
Also, Google has a tool that will measure the testability of your code...
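One common way around the "can't mock out statics" problem is to hide the static call behind a small interface and inject it; here is a sketch with invented names (a time-based example, since System.currentTimeMillis is a classic offender).

// all names illustrative
interface Clock {
    long now();
}

class SystemClock implements Clock {
    public long now() {
        return System.currentTimeMillis(); // the static call is confined to one place
    }
}

class SessionTimeout {
    private final Clock clock;
    private final long startedAt;

    SessionTimeout(Clock clock) {
        this.clock = clock;
        this.startedAt = clock.now();
    }

    boolean isExpired(long maxAgeMillis) {
        // tests inject a fake Clock and move time forward deterministically
        return clock.now() - startedAt > maxAgeMillis;
    }
}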
I'm continually trying to find a process where unit testing is less of a chore and something that I actually WANT to do. In my experience, a pretty big factor is your tools. I do a lot of ActionScript work and sadly the tools are somewhat limited, such as no IDE integration and a lack of more advanced mocking frameworks (but good things are a-coming, so no complaints here!). I've done test-driven development before with more mature testing frameworks and it was definitely a more pleasurable experience, but it still felt like somewhat of a chore.
Recently however I started writing code in a different manner. I used to start with writing the test, watching them fail, writing code to make the test succeed, rinse and repeat and all that.
Now, however, I start by writing interfaces, almost no matter what I'm going to do. At first I of course try to identify the problem and think of a solution. Then I start writing the interfaces to get a sort of abstract feel for the code and the communication. At that point, I usually realize that I haven't really figured out a proper solution to the problem at all, as a result of not fully understanding the problem. So I go back, revise the solution and revise my interfaces. When I feel that the interfaces reflect my solution, I actually start with the implementation, not the tests. When I have something implemented (draft implementations, usually baby steps), I start testing it. I keep going back and forth between testing and implementing, a few steps forward at a time. Since I have interfaces for everything, it's incredibly easy to inject mocks.
I find working like this, with classes having very little knowledge of other implementations and only talking to interfaces, extremely liberating. It frees me from thinking about the implementation of another class and lets me focus on the current unit. All I need to know is the contract that the interface provides.
But yeah, I'm still trying to work out a process that works super-fantastically-awesomely-well every time.
Oh, I also wanted to add that I don't write tests for everything. Vanilla properties that don't do much but get/set variables are useless to test. They are guaranteed by the language contract to work. If they don't, I have far worse problems than my units not being testable.
To prepare your code to be testable:
Document your assumptions and exclusions.
Avoid large complex classes that do more than one thing - keep the single responsibility principle in mind.
When possible, use interfaces to decouple interactions and allow mock objects to be injected.
When possible, make public methods virtual to allow mock objects to emulate them.
When possible, use composition rather than inheritance in your designs - this also encourages (and supports) encapsulation of behaviors into interfaces.
When possible, use dependency injection libraries (or DI practices) to provide instances with their external dependencies.
To get the most out of your unit tests, consider the following:
Educate yourself and your development team about the capabilities of the unit testing framework, mocking libraries, and testing tools you intend to use. Understanding what they can and cannot do will be essential when you actually begin writing your tests.
Plan out your tests before you begin writing them. Identify the edge cases, constraints, preconditions, postconditions, and exclusions that you want to include in your tests.
Fix broken tests as near to when you discover them as possible. Tests help you uncover defects and potential problems in your code. If your tests are broken, you open the door to having to fix more things later.
If you follow a code review process in your team, code review your unit tests as well. Unit tests are as much a part of your system as any other code - reviews help to identify weaknesses in the tests just as they would for system code.
You don't necessarily need to "make your code more unit testing friendly".
Instead, a mocking toolkit can be used to make testability concerns go away.
One such toolkit is JMockit.

Practical refactoring using unit tests

Having just read the first four chapters of Refactoring: Improving the Design of Existing Code, I embarked on my first refactoring and almost immediately came to a roadblock. It stems from the requirement that before you begin refactoring, you should put unit tests around the legacy code. That allows you to be sure your refactoring didn't change what the original code did (only how it did it).
So my first question is this: how do I unit-test a method in legacy code? How can I put a unit test around a 500 line (if I'm lucky) method that doesn't do just one task? It seems to me that I would have to refactor my legacy code just to make it unit-testable.
Does anyone have any experience refactoring using unit tests? And, if so, do you have any practical examples you can share with me?
My second question is somewhat hard to explain. Here's an example: I want to refactor a legacy method that populates an object from a database record. Wouldn't I have to write a unit test that compares an object retrieved using the old method, with an object retrieved using my refactored method? Otherwise, how would I know that my refactored method produces the same results as the old method? If that is true, then how long do I leave the old deprecated method in the source code? Do I just whack it after I test a few different records? Or, do I need to keep it around for a while in case I encounter a bug in my refactored code?
Lastly, since a couple people have asked...the legacy code was originally written in VB6 and then ported to VB.NET with minimal architecture changes.
For instructions on how to refactor legacy code, you might want to read the book Working Effectively with Legacy Code. There's also a short PDF version available here.
A good example of theory meeting reality. Unit tests are meant to test a single operation, and many pattern purists insist on the Single Responsibility Principle, so we have lovely clean code and tests to go with it. However, in the real (messy) world, code (especially legacy code) does lots of things and has no tests. What this needs is a dose of refactoring to clean up the mess.
My approach is to build tests, using the unit test tools, that test lots of things in a single test. In one test I may be checking that the DB connection is open, changing lots of data, and doing a before/after check on the DB. I inevitably find myself writing helper classes to do the checking, and more often than not those helpers can then be added into the code base, as they have encapsulated emergent behaviour/logic/requirements. I don't mean I have a single huge test; what I do mean is that many tests are doing work which a purist would call an integration test - does such a thing still exist? I've also found it useful to create a test template and then create many tests from that, to check boundary conditions, complex processing etc. (there's a sketch of this idea after this answer).
BTW which language environment are we talking about? Some languages lend themselves to refactoring better than others.
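As one way of getting that "test template" effect, here is a hedged sketch using a JUnit 4 parameterized test, where each row is a boundary case run through the same test body; DiscountCalculator and its threshold rule are invented purely for illustration.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class DiscountBoundaryTest {

    // each row is one boundary case fed through the same test body
    @Parameters(name = "order total {0} -> discount {1}")
    public static Collection<Object[]> cases() {
        return Arrays.asList(new Object[][] {
                {0.0, 0.0},     // empty order
                {99.99, 0.0},   // just under the threshold
                {100.0, 5.0},   // exactly on the threshold
                {100.01, 5.0},  // just over the threshold
        });
    }

    private final double orderTotal;
    private final double expectedDiscount;

    public DiscountBoundaryTest(double orderTotal, double expectedDiscount) {
        this.orderTotal = orderTotal;
        this.expectedDiscount = expectedDiscount;
    }

    @Test
    public void appliesTheExpectedDiscountAtTheBoundary() {
        assertEquals(expectedDiscount,
                new DiscountCalculator().discountFor(orderTotal), 0.0001);
    }
}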
From my experience, I'd write tests not for particular methods in the legacy code, but for the overall functionality it provides. These might or might not map closely to existing methods.
Write tests at whatever level of the system you can (if you can); if that means running a database etc. then so be it. You will need to write a lot more code to assert what the code is currently doing, as a 500+ line method is likely to have a lot of behaviour wrapped up in it. As for comparing the old versus the new: if you write the tests against the old code, they pass, and they cover everything it does, then when you run them against the new code you are effectively checking the old against the new.
I did this to test a complex SQL trigger I wanted to refactor. It was a pain and took time, but a month later, when we found another issue in that area, it was worth having the tests there to rely on.
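In JUnit terms, the old-versus-new comparison the question asks about might look roughly like the sketch below while both methods are still around (sometimes called a characterization or "golden master" test); the loader and Customer types are placeholders, and Customer needs a meaningful equals() for the comparison to say anything.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CustomerLoaderCharacterizationTest {

    // IDs of known-interesting rows in the test database
    private final int[] sampleRecordIds = {1, 42, 1003};

    @Test
    public void refactoredLoaderMatchesLegacyLoader() {
        LegacyCustomerLoader oldLoader = new LegacyCustomerLoader();
        RefactoredCustomerLoader newLoader = new RefactoredCustomerLoader();

        for (int id : sampleRecordIds) {
            Customer expected = oldLoader.load(id);   // behaviour to preserve
            Customer actual = newLoader.load(id);
            assertEquals("record " + id, expected, actual);
        }
    }
}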
In my experience this is the reality when working on legacy code. The book mentioned by Esko (Working Effectively with Legacy Code) is an excellent work that describes various approaches for getting there.
I have seen similar issues with our own unit tests, which have grown to become system/functional tests. The most important thing when developing tests for legacy or existing code is to define the term "unit". It can even be a functional unit like "reading from the database", etc. Identify the key functional units and maintain the tests that add value.
As an aside, there was a recent exchange between Joel S. and Martin F. on TDD/unit tests. My take is that it is important to define the unit and keep focus on it! URLs: Open Letter, Joel's transcript and podcast
That really is one of the key problems of trying to retrofit tests onto legacy code. Are you able to break the problem domain down into something more granular? Does that 500+ line method do anything other than make system calls into JDK/Win32/.NET Framework JARs/DLLs/assemblies? I.e. are there more granular function calls within that 500+ line behemoth that you could unit test?
The following book: The Art of Unit Testing contains a couple of chapters with some interesting ideas on how to deal with legacy code in terms of developing Unit Tests.
I found it quite helpful.

Why do I need a mocking framework for my unit tests?

Recently there has been quite some hype around all the different mocking frameworks in the .NET world. I still haven't quite grasped what is so great about them. It doesn't seem to be too hard to write the mocking objects I need myself. Especially with the help of Visual Studio, I can quickly write a class that implements the interface I want to mock (it auto-generates almost everything for me) and then write an implementation for the method(s) I need for my test. Done! Why go through the hassle of understanding a mocking framework for the sole purpose of saving a few lines of code? Or is a mocking framework not only about saving lines of code?
Once I finally got the hang of mock objects, I realized that they're essential for unit testing for the same reason that double blind testing or control groups are essential for scientific trials: they isolate what you're actually testing.
If you're testing a class which has quite a bit of interaction with other interfaces, you not only save the lines of code from having to mock each and every interface, but you also gain the ability to do things like "throw an exception if an unexpected method is called" or "fail if these methods are called out of order". You can get remarkably sophisticated with mock frameworks, and though I'll quickly admit there's a large learning curve, when you get up to speed they'll help make your unit tests more thorough without being bloated.
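For a concrete (if hedged) picture, here is roughly how the "unexpected method" and "out of order" checks look with Mockito, as one example of such a framework; OrderRepository, MailSender and OrderProcessor are invented domain types.

import static org.mockito.Mockito.inOrder;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoMoreInteractions;

import org.junit.Test;
import org.mockito.InOrder;

public class OrderProcessorTest {

    @Test
    public void savesThenConfirmsAndDoesNothingElse() {
        OrderRepository repository = mock(OrderRepository.class);
        MailSender mail = mock(MailSender.class);

        new OrderProcessor(repository, mail).process("order-1");

        // fails if any unexpected method was called on either collaborator
        verify(repository).save("order-1");
        verify(mail).sendConfirmation("order-1");
        verifyNoMoreInteractions(repository, mail);

        // fails if the two expected calls happened in the wrong order
        InOrder inOrder = inOrder(repository, mail);
        inOrder.verify(repository).save("order-1");
        inOrder.verify(mail).sendConfirmation("order-1");
    }
}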
You actually identified one of the key points of a mocking framework in your question: the fact that you have to code the mocks yourself is something the developer shouldn't need to be concerned with. Mocking frameworks give you implementations of interfaces programmatically, and those implementations are functional (based on how you set up the mock).
What do you do if you are testing an ICustomerDAO, for example, and you want to test some method 14 times each with different outcomes? Implement 14 different classes manually? I doubt that anyone would want to do that.
Mocks give you the power to define what will happen with parts of your classes when you are not concerned with whether or not they will actually work, like throwing exceptions whenever you want them to, returning zero results and making sure you handle that correctly, etc...
They are a great unit testing tool.
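To make the ICustomerDAO point concrete, here is a hedged sketch with Mockito: each test stubs just the outcome it needs, instead of a hand-written implementation per scenario. The service, method names and exception type are invented for illustration.

import static org.junit.Assert.assertNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CustomerServiceTest {

    private final ICustomerDAO dao = mock(ICustomerDAO.class);
    private final CustomerService service = new CustomerService(dao);

    @Test
    public void returnsNullWhenTheCustomerIsMissing() {
        when(dao.findById(7)).thenReturn(null);
        assertNull(service.loadCustomer(7));
    }

    @Test(expected = ServiceException.class)
    public void translatesDataAccessFailures() {
        when(dao.findById(7)).thenThrow(new RuntimeException("connection lost"));
        service.loadCustomer(7);
    }

    // ...the other twelve outcomes are a few lines each, with no extra classes to maintain
}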
Previous questions that may help:
What is a mock and when should you use it?
Mockist vs classical TDD
I find that using a mocking framework allows me to generate tests a lot faster and with better verification that what I expect to happen in the test actually is happening. In the past I have implemented stubs or fakes myself. I found that I needed to generate stubs specific to each test I wanted, and this took a lot of time. I can create the same test much faster using a mocking framework. The good ones support the generation of fakes, stubs or mocks with straightforward syntax.
It takes a while to get the hang of it. I avoided it for a while, but now I wouldn't try to work without a mocking framework, for the reasons @Chamelaeon states.
Roy Osherove had a poll about Mock Frameworks and down in the comment section, there is a discussion (albeit brief) about whether one needs a Mock Framework or not.
I personally have been manually doing exactly as you stated and it has worked well enough, but this has mainly been out of habit rather than a closely-held opinion on mock frameworks in general.
Well I certainly don't think that you NEED a mocking framework. It's a framework like any other, and it's ultimately designed to save you some time and effort. You can also do things like roll your own common data structures like stacks and queues, but isn't it generally easier to just use the ones built into the class libraries that ship with the compiler/IDE of your language of choice?
I'm sure there are other compelling reasons for using mocking frameworks, though I'd leave it to the TDD and unit testing gurus to answer.
For the same reason you wouldn't try to write unit tests without NUnit. A mocking framework will assist you in verifying state and behavior over hundreds of unit tests. It's worth the 2 weeks or so of pain to get up to speed and really helps you focus on what needs to be tested.
One thing that troubles me about a mocking framework is that the definition of "what a function should output given an input", expressed via statements like
when(mock.someMethod("some arg")).thenReturn("something");
gets spread across many unit test classes.
Let me elaborate with an example. Let's say there was a DAO interface method getEmp(int EmpID) which returned an Employee object when passed an employee ID as a parameter. Assume that this method was being mocked by 10 different unit test classes. Now if, in the future, this method were changed to return a newer version of the Employee object, one would have to go to each of the 10 different classes to update this change.
The disadvantages are as follows...
a) I don't know how to find all the classes which mock this method so that I can go and update them.
b) My existing test cases which consume the mock DAO object remain blissfully unaware of the changes that have happened to the DAO interface, because the mock has not changed, and hence they continue to be green.
Ideally, if I had coded a single mock class myself and consumed it everywhere, I would have just one place to update for the newer version of the Employee object. Also, once I updated that one place, all my existing test cases which consume the mock would break, and I would then know exactly which places I need to go and update for the new Employee object.
Any thoughts on my views?
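One possible middle ground, for what it's worth: keep the framework but centralise the stubbing in a single helper that every test class uses, so the "newer Employee" change is made in one place. A sketch with invented type names, mirroring the getEmp(int) interface described above:

import static org.mockito.Mockito.anyInt;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// a single place that knows how getEmp(int) should behave in tests
public final class EmployeeDaoMocks {

    private EmployeeDaoMocks() {
    }

    public static EmployeeDao daoReturning(Employee employee) {
        EmployeeDao dao = mock(EmployeeDao.class);
        when(dao.getEmp(anyInt())).thenReturn(employee);
        return dao;
    }
}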
One of the good things about a mocking framework is that it allows setting expectations on the objects being mocked. With the expectations I can then set up all sorts of conditions to exercise the code that's being tested.
An isolation framework or mocking framework allows you to test the code you want, without its dependencies. It makes for short running tests, allows you to debug quickly, and easily build a safety net of tests around the code. Different frameworks have different features, and as said before - it's a tool, and you should select the right tool for the job.
I've used Rhino Mocks as a mocking framework. Five other developers and I used it on a large enterprise application that was an 8-month project. We used TDD on the project. Was it worth it? I guess. Was there such a massive, huge selling point to using mocks that I have to use them on every project? In my opinion, no. They are not something that is necessary; they are just a tool that you can use if you want to try it out. On some projects you can roll your own mock classes, as some here say they prefer - it is easier. Other projects are larger and may require a mocking framework. The key word (in my opinion) is MAY require... How much code coverage do you require? To me, that is another consideration when deciding whether to use mocks. On the project I did with TDD/Rhino Mocks we were required to have 80% code coverage, so the mocks helped us attain that. If our code coverage requirements had been lower, for example 40%, we probably would not have used a mocking framework and would have just written our own mock classes, as others mention they do.