Writing tests before writing code - unit-testing

As far as I understand it, the TDD and BDD cycle goes something like this:
Start by writing tests
See them fail
Write code
Pass the tests
Repeat
The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?

You have the essence of it, but I would change one part of your description. You don't write tests before you write code - you write a test before you write code. Then - before writing any more tests - you write just enough code to get your test to pass. When it's passing, you look for opportunities to improve the code, and make the improvements while keeping your tests passing - and then you write your second test. The point is, you're focusing on one tiny bit of functionality at any given time. What is the next thing you want your program to do? Write a test for that, and nothing more. Get that test passing. Clean the code. What's the next thing you want it to do? Iterate until you're happy.
The thing is, if you write all of your tests up front before writing any code, you don't have that focus. It's one test at a time.
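As a minimal sketch of that rhythm - using an NUnit-style test and a made-up Greeter class purely for illustration - the first iteration might look like this:

// Step 1: write one test for the very next thing you want the code to do.
[Test]
public void Greet_ReturnsGreetingWithName()
{
    var greeter = new Greeter();   // Greeter doesn't exist yet, so this is 'red'
    Assert.AreEqual("Hello, Ada", greeter.Greet("Ada"));
}

// Step 2: write just enough code to make that single test pass ('green').
public class Greeter
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

// Step 3: clean up anything that needs it while keeping the test green,
// and only then write the next test.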

Yes, that is correct. If you check out Michael Hartl's book on Ruby on Rails (free for HTML viewing), you will see how he does this specifically. So to add on to what lared said, let's say your first job is to add a new button to a web page. Your process would look like this:
Write a test to look for the button on the page visually.
Verify that the test fails (there should not be a button present, therefore it should fail).
Write code to place button on the page.
Verify test passes.
TDD will save your bacon when you accidentally do something to your code that breaks an old test. For example, suppose you change the button to a link by mistake: the test will fail and alert you to the problem.
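To make step 1 concrete, here is a rough sketch of such a test in C# using Selenium WebDriver and NUnit (the page URL and the button id are invented for illustration):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class HomePageTests
{
    [Test]
    public void HomePage_ShowsSubmitButton()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/");   // hypothetical page under test

            // Red first: the button isn't there yet, so this assertion fails.
            var buttons = driver.FindElements(By.Id("submit-button"));
            Assert.IsNotEmpty(buttons, "Expected a submit button on the page");
        }
    }
}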

If you are using a real programming language (you know, with a compiler and all), then yes, of course you have to write class skeletons or interfaces, otherwise your tests will not even compile.
If you are using a scripting language, then you do not even have to write skeletons or interfaces, because your test script will happily begin to run and will fail on the first non-existent class or method that it encounters.
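A tiny C# illustration of that first kind of "red": the Order class below is hypothetical and doesn't exist yet, so the test fails at compile time rather than at run time.

[Test]
public void NewOrder_HasNoLineItems()
{
    var order = new Order();   // compiler error until Order is created - your first 'red'
    Assert.AreEqual(0, order.LineItems.Count);
}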

The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?
To expand on a point that lared made in his comment:
Then you write tests, which fail because the classes/whatever doesn't exist, and then you write the minimal amount of code which makes them pass
One thing to remember with TDD is that the test you are writing is the first client of your code. Therefore I wouldn't worry about not having the classes or interfaces defined already - because as he pointed out, simply by writing code referencing classes that don't exist, you will get your first "Red" in the cycle - namely, your code won't compile! That's a perfectly valid test.
TDD can also mean Test Driven Design
Once you embrace this idea, you will find that writing the test first serves less as a simple "is this code correct?" check and more as an "is this design right?" guideline, and you end up producing production code that is not only correct, but well structured as well.
A video showing this process would be ideal, but I don't have one, so I'll make a stab at an example instead. Note this is a very simple example and it ignores up-front pencil-and-paper planning and real-world requirements from the business, which will often be the driving force behind your design process.
Anyway suppose we want to create a simple Person object that can store a person's name and age. And we'd like to do this via TDD so we know it's correct.
So we think about it for a minute, and write our first test (note: example using pseudo C# / pseudo test framework)
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person();
    Assert.Empty(sut.Name);
    Assert.Zero(sut.Age);
}
Straight away we have a failing test: this won't compile, because the Person class doesn't exist. So you use your IDE to auto-create the class for you:
public class Person
{
    public int Age { get; set; }
    public string Name { get; set; }
}
OK, now you have a first passing test. But now as you look at that class, you realise that there is nothing to ensure a person's age is always positive (>0). Let's assert that this is the case:
public void GivenANegativeAgeValue_PersonWillRejectIt()
{
    var sut = new Person();
    Assert.CausesException(() => sut.Age = -100);
}
Well, that test fails so let's fix up the class:
public class Person
{
    protected int age;

    public int Age
    {
        get { return age; }
        set
        {
            if (value <= 0)
            {
                throw new InvalidOperationException("Age must be a positive number");
            }
            age = value;
        }
    }

    public string Name { get; set; }
}
But now you might say to yourself - OK, since I know that a person's age can never be <= 0, why do I even bother creating a writable property? Do I always want to have to write two statements, one to create a Person and another to set their Age? What if I forgot to do it in one part of my code? What if I created a Person in one part of my code, and then later on, in another module, I tried to assign a negative value to Age? Surely Age must be an invariant of Person, so let's fix this up:
public class Person
{
    public Person(int age)
    {
        if (age <= 0)
        {
            throw new InvalidOperationException("Age must be a positive number");
        }
        this.Age = age;
    }

    public int Age { get; protected set; }
    public string Name { get; set; }
}
And of course you have to fix your tests because they won't compile any more - and in fact you now realise that the second test is redundant and can be dropped!
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person(42);
    Assert.Empty(sut.Name);
    Assert.Equal(42, sut.Age);
}
And you will then probably go through a similar process with Name, and so on.
Now I know this seems like a terribly long-winded way of creating a class, but consider that you have basically designed this class from scratch with built-in defences against invalid state - for example you will never ever have to debug code like this:
//A Person instance, 6,000 lines and 3 modules away from where it was instantiated
john.Age = x; //Crash because x is -42
or
//A Person instance, reserialised from a message queue in another process
var someValue = 2015/john.Age; //DivideByZeroException because we forgot to assign john's age
For me, this is one of the key benefits of TDD: using it not only as a testing tool but as a design tool. It makes you think about the production code you are implementing, forces you to consider how the classes you create could end up in invalid, application-killing states and how to guard against that, and helps you write objects that are easy to use - objects whose consumers need to understand what they do rather than how they work.
Since any modern IDE worth its salt will provide you with the opportunity to create missing classes / interfaces with a couple of keystrokes or mouse clicks, I believe it's well worth trying this approach.

TDD and BDD are different things that share a common mechanism. The shared mechanism is that you write something that 'tests' something before you write the thing that does something. You then use the failures to guide/drive the development.
You write the tests by thinking about the problem you are trying to solve, and fleshing out the details by pretending that you have an ideal solution that you can test. You write your test to use your ideal solution. Doing this does all sorts of things like:
Discover names of things you need for your solution
Uncover interfaces for your things to make them easy to use
Experience failures with your things
...
A difference between BDD and TDD is that BDD is much more focused on the 'what' and the 'why', rather than the 'how'. BDD is very concerned with the appropriate use of language to describe things. BDD starts at a higher level of abstraction. When you get to areas where the detail overwhelms the language, TDD is used as a tool to implement the detail.
This idea that you can choose to think of things and write about them at different levels of abstraction is key.
You write the 'tests' you need by choosing:
the appropriate language for your problem
the appropriate level of abstraction to explain your problem simply and clearly
an appropriate mechanism to call your functionality.
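To make the difference in level concrete, here is a rough C# sketch; all the domain types and members (Invoice, CustomerAccount, IsBlocked and so on) are invented for illustration:

// BDD-level: the 'what' and the 'why', phrased in the language of the problem.
[Test]
public void When_an_overdue_invoice_is_paid_the_customer_account_is_unblocked()
{
    var invoice = new Invoice(dueDate: DateTime.Today.AddDays(-30));
    var account = new CustomerAccount(blockedBy: invoice);

    account.RegisterPayment(invoice.AmountDue);

    Assert.IsFalse(account.IsBlocked);
}

// TDD-level: the 'how', pinning down one small detail of the implementation.
[Test]
public void Invoice_IsOverdue_WhenDueDateIsInThePast()
{
    var invoice = new Invoice(dueDate: DateTime.Today.AddDays(-1));
    Assert.IsTrue(invoice.IsOverdue);
}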

Related

test cases for unit testing

In my project we have a mass of methods that each test something. If you want to understand what is going on, you have to look through all of those methods. When a class has 20 test methods, it is challenging to find the test case(s) you care about in that mass of methods.
I have never seen interfaces used to define the test cases that you cover in your tests.
For example:
public class A {
    public SomeResult doSomething(Param param) {
        .....
    }
    ..... some other methods
}
For this method there are 4 cases (for example):
check that the method works as expected with a null param
check that the method throws a runtime exception for some range of param values
check that the method returns the expected result (the normal case)
check something different
In our project, to test those cases, people just create 4 methods (and they can appear in any order - the first two cases might sit at the beginning of the test class while the last two are written at the end, 200 lines of code below). Also, a test's name does not always make clear what that method checks.
Is it a good way to describe the test cases in an interface like this:
public interface ATestSpecification {
    void doSomething_checkForNullParam();
    void doSomething_checkExceptionForNotAllowedParam();
    void doSomething_normalCase();
    void doSomething_checkSomethingDifferent();
}
And the test class:
public class ATest implements ATestSpecification {
    ...
    // implement the test cases described in the test specification
    ...
}
Since developer tests are essentially documentation and exist for the convenience of the developer(s) working on the code, I would recommend that you do away with that idea of creating interfaces for test methods--have never seen that before and am sorry to have seen it just now. The existence of those interfaces can only get in your way when you search the code for references to a method name or have your IDE display a call hierarchy on any method that you would want to find an example of how to use correctly. Don't put things in your own way.
In the case of tests, because they are documentation, I tend to diverge from the usual pattern for naming methods in Java. That is, I will abandon using camelCase in favor of all_lowercase_separated_by_underscores, which seems easier to read, generally. Thus I will have "should_do_something" or "ensure_whatever" so that the test case name helps me find what I might be looking for. Also, I would be less focused on testing methods and more focused on testing behavior--I know that sounds like splitting hairs, but that's the way I think of it. Figure out what the class needs to do and write those tests then implement using TDD. I usually don't feel the need to back-fill any tests if I use TDD or a close approximation thereof. Jimmy is completely correct about keeping your code focused and following SRP.
Hope that helps!
EDIT: Naming conventions are always controversial - just pick one that works for you. It's come up here and here before.

The process of Unit Testing

OK, I know what a unit test is, but I use it in some projects and not others... some clients don't know how it's done and follow one convention... blah blah.
So here I am asking: how EXACTLY is the unit test process carried out?
I hear, and read, that you write the tests first, then the functionality, and use code coverage to identify any "slips" - code that has not been covered by tests.
So, let's use a simple example:
Requirement: "application must return the result of 2 numbers combined."
You and I know we would have a class, something like "Addition", with a method "Add" which returns an integer, like so:
public class Addition
{
    public int Add(int num1, int num2)
    {
        return num1 + num2;
    }
}
But even before writing this class, how do you write tests first? What is your process? What do you do? What would the process be when you have that spec doc and are going into development?
Many thanks,
The process you're referring to is called Test-Driven Development. The idea is simple and close to what you described: given some functionality, you start by writing a test for that functionality. In your Add example, before any working code is written you should have a simple test - a test that fails.
Failing Test
[Test]
public void TestAdd()
{
    var testedClass = new Addition();
    var result = testedClass.Add(1, 2);
    Assert.AreEqual(3, result);
}
This is a simple test for your .Add method, stating your expectations of the soon-to-be working code. Since you don't have any code just yet, this test will naturally fail (as it is supposed to - which is good).
Passing test
The next step is to write the most basic code that makes the test pass (strictly speaking, the most basic code would be return 3;, but for this simple example that level of detail is not necessary):
public int Add(int num1, int num2)
{
    return num1 + num2;
}
This works and the test passes. What you have at this point is basic proof that your method works in the way you stated in your assumptions/expectations (the test).
However, you might notice that this test is not a good one; it tests only one simple input out of many. Not to mention, in some cases one test might not be enough, and even though you had initial requirements, testing might reveal that more is needed (for example argument validation or logging). This is the point where you go back to reviewing your requirements and writing more tests, which leads us to...
Refactor
At this point you should refactor the code you just wrote - and I'm talking about both the unit test code and the tested implementation. Since the Add method is fairly simple and there's not much you can improve about adding two numbers, you can focus on making the test better. For example:
add more test cases (or consider data driven testing)
make test name more descriptive
improve variables naming
extract magic numbers to constants
Like this:
[TestCase(0, 0, 0)]
[TestCase(1, 2, 3)]
[TestCase(1, 99, 100)]
public void Add_ReturnsSumOfTwoNumbers(int first, int second, int expectedSum)
{
    var testedClass = new Addition();
    var actualSum = testedClass.Add(first, second);
    Assert.That(actualSum, Is.EqualTo(expectedSum));
}
Refactoring is a topic worth its own book (and there are many), so I won't go into details. The process we just went through is often referred to as Red-Green-Refactor (red indicating a failing test, green a passing one), and it's part of TDD. Remember to rerun the tests one more time in order to make sure refactoring didn't accidentally break anything.
This is how the basic cycle goes. You start with requirements, write failing test for it, write code to make test pass, refactor both code and tests, review requirements, proceed with next task.
Where to go from here?
A few useful resources that are a good natural follow-up once you get to know the idea behind TDD (even from such a brief explanation as presented in this post):
Test-Driven Development by example - introduction to TDD by Kent Beck himself
Test-Driven Development in Microsoft.NET - this book is a great walkthrough, especially if you're new to TDD. It explains the concepts really well and contains a comprehensive example of developing an application with TDD (including data access, web services and more)
Roy Osherove's TDD Kata - contains videos of people developing simple calculator API with TDD; many different languages, many different frameworks (definitely worth seeing)

TDD: Adding a method to test state

So, I'm starting to write some logic for a simple program (toy game on the side). You have a specific ship (called a setup) that is a ship + modules. You start with an empty setup based off a ship and then add modules to that setup. Ships also have a numbered array of module positions.
var setup = new Setup(ship); // ship is a stub (IShip) defined someplace else
var module = new Mock<IModule>().Object;
setup.AddModule(module, 1); // 1 = which position
So, this is the code in my test method. I now need to assert on this code. Well, I need a getter method right?
Assert.AreEqual(module, setup.GetModule(1));
This might sound really dumb and I'm worrying about nothing, but for some stupid reason I'm concerned with adding a method just to assert that a test passed.
Is this fine, and in fact part of the design process that TDD is driving? For instance, I know I need an AddModule method because I want to test it, and the fact that this requires a GetModule method to test is simply an evolution of my design via TDD.
Or is this kind of a smell because I don't even know if I'll really need GetModule in my code and it will only be used in a test?
For example, adding a module is going to ultimately affect different stats of a setup (armor, shield, firepower, etc). The thing is those are going to be complex, and I wanted to start with a simple test. But in the end, those are the public attributes I care about -- a setup is defined by its stats, not by a list of modules.
Interesting question. I'm glad to hear you're writing the tests first.
If you let the design manifest itself through the tests, you're more likely to build only the parts you'll need. But is this the best design? Maybe not, but don't let that discourage you -- your AddModule method works!
It may be too early to tell if you'll need the GetModule method later. For now, build up the functionality you need and get to green, then refactor it in small steps (keeping the tests green) to move toward the design you want.
Part of evolving the design is to start with baby steps like a simple method and then grow into the complex stats (eventually dropping this method and changing the test) when enough supports it. When doing TDD, don't expect that the first test you write is targeting the ideal interface. It is OK to have some messiness that will get dropped as you evolve the design.
That being said, if you see no public purpose to the method, try to limit its visibility to the test code as much as is reasonable (one way to do that in C# is sketched below). Although even that should eventually go away as you build out the rest of the system and have something real to assert as a side effect of calling AddModule.
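One common way to do that in C#, if you decide to keep the accessor at all, is to make it internal rather than public and expose internals only to your test assembly; the assembly name below is a placeholder:

// In the production assembly (e.g. AssemblyInfo.cs or any source file):
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyGame.Tests")]   // placeholder test assembly name

public class Setup
{
    public void AddModule(IModule module, int position) { /* ... */ }

    // Visible to the test project, but not part of the public API.
    internal IModule GetModule(int position)
    {
        return null;   // placeholder body
    }
}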
I would be wary of introducing a public method in my class that is only used for testing.
There are various ways how you could test this:
Reflection: The GetModule method is a private method in your class (this could also work if your 'stats' are private) and you can access it in your test method via reflection. This will work well; the only trouble is you will not get any compiler errors if you change the name of the private method or its signature (but, of course, your test will fail and you will know early).
Inheritance: The GetModule method could be protected (visible only through inheritance) and your test class could inherit from the main class. That way your test class gets access to the method, but it is not exposed to the outside world.
Assert the side-effect: This is where you really think about what it means to add a module to the system. If adding a module is going to affect some 'stats', as you put it, you could write tests which assert that the stats are appropriately modified (a sketch of this approach follows below).
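A rough sketch of that third option, assuming the aggregate stats are exposed on the setup (the ArmorBonus and Armor members below are invented for illustration):

[Test]
public void AddingAnArmorModule_IncreasesTheSetupsArmorStat()
{
    var ship = new Mock<IShip>();
    var module = new Mock<IModule>();
    module.Setup(m => m.ArmorBonus).Returns(50);   // hypothetical stat on the module

    var setup = new Setup(ship.Object);
    var armorBefore = setup.Armor;                 // hypothetical aggregate stat

    setup.AddModule(module.Object, 1);

    // No GetModule needed: we assert on the behaviour the game actually cares about.
    Assert.AreEqual(armorBefore + 50, setup.Armor);
}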

Should I change the naming convention for my unit tests?

I currently use a simple convention for my unit tests. If I have a class named "EmployeeReader", I create a test class named "EmployeeReader.Tests". I then create all the tests for the class in the test class with names such as:
Reading_Valid_Employee_Data_Correctly_Generates_Employee_Object
Reading_Missing_Employee_Data_Throws_Invalid_Employee_ID_Exception
and so on.
I have recently been reading about a different type of naming convention used in BDD. I like the readability of this naming, to end up with a list of tests something like:
When_Reading_Valid_Employee (fixture)
Employee_Object_Is_Generated (method)
Employee_Has_Correct_ID (method)
When_Reading_Missing_Employee (fixture)
An_Invalid_Employee_ID_Exception_Is_Thrown (method)
and so on.
Has anybody used both styles of naming? Can you provide any advice, benefits, drawbacks, gotchas, etc. to help me decide whether to switch or not for my next project?
The naming convention I've been using is:
functionName_shouldDoThis_whenThisIsTheSituation
For example, these would be some test names for a stack's 'pop' function
pop_shouldThrowEmptyStackException_whenTheStackIsEmpty
pop_shouldReturnTheObjectOnTheTopOfTheStack_whenThereIsAnObjectOnTheStack
Your second example (having a fixture for each logical "task", rather than one for each class) has the advantage that you can have different SetUp and TearDown logic for each task, thus simplifying your individual test methods and making them more readable.
You don't need to settle on one or the other as a standard. We use a mixture of both, depending on how many different "tasks" we have to test for each class.
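For example, with NUnit-style fixtures, each context gets its own SetUp; the EmployeeReader API and InvalidEmployeeIdException below are assumptions based on the question's names:

[TestFixture]
public class When_reading_a_valid_employee
{
    private Employee _employee;

    [SetUp]
    public void Context()
    {
        var reader = new EmployeeReader();
        _employee = reader.Read(42);   // hypothetical API
    }

    [Test]
    public void An_employee_object_is_generated()
    {
        Assert.IsNotNull(_employee);
    }

    [Test]
    public void The_employee_has_the_correct_id()
    {
        Assert.AreEqual(42, _employee.Id);
    }
}

[TestFixture]
public class When_reading_a_missing_employee
{
    [Test]
    public void An_invalid_employee_id_exception_is_thrown()
    {
        var reader = new EmployeeReader();
        Assert.Throws<InvalidEmployeeIdException>(() => reader.Read(999));
    }
}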
I feel the second is better because it makes your unit tests more readable to others: very long test names make the code look more difficult to read and harder to skim through. If you still feel there's any ambiguity about what a test does, you can add comments to clarify it.
Part of the reasoning behind the 2nd naming convention that you reference is that you are creating tests and behavioural specifications at the same time. You establish the context in which things are happening and what should actually then happen within that context. (In my experience, the observations/test-methods often start with "should_," so you get a standard "When_the_invoicing_system_is_told_to_email_the_client," "should_initiate_connection_to_mail_server" format.)
There are tools that will reflect over your test fixtures and output a nicely formatted html spec sheet, stripping out the underscores. You end up with human-readable documentation that is in sync with the actual code (as long as you keep your test coverage high and accurate).
Depending on the story/feature/subsystem on which you're working, these specifications can be shown to and understood by non-programmer stakeholders for verification and feedback, which is at the heart of agile and BDD in particular.
I use the second method, and it really helps with describing what your software should do. I also use nested classes to describe more detailed contexts.
In essence, test classes are contexts, which can be nested, and the methods are all one-line assertions. For example:
public class MyClassSpecification
{
    protected MyClass instance = new MyClass();

    public class When_foobar_is_42 : MyClassSpecification
    {
        public When_foobar_is_42() {
            this.instance.SetFoobar( 42 );
        }

        public class GetAnswer : When_foobar_is_42
        {
            private Int32 result;

            public GetAnswer() {
                this.result = this.instance.GetAnswer();
            }

            public void should_return_42() {
                Assert.AreEqual( 42, result );
            }
        }
    }
}
which will give me the following output in my test runner:
MyClassSpecification+When_foobar_is_42+GetAnswer
should_return_42
I've been down both of the roads you describe in your question, as well as a few others. Your first alternative is pretty straightforward and easy for most people to understand. I personally like the BDD style (your second example) more, because it isolates different contexts and groups observations on those contexts. The only real downside is that it generates more code, so starting out feels slightly more cumbersome until you see the neat tests. Also, if you use inheritance to reuse fixture setup, you want a test runner that outputs the inheritance chain (a sketch of this pattern follows below). Consider a class "An_empty_stack" that you want to reuse, so you write another class: "When_five_is_pushed_on : An_empty_stack". You want that full chain as output, not just "When_five_is_pushed_on". If your test runner does not support this, your tests will contain redundant information like "When_five_is_pushed_on_empty_stack : An_empty_stack" just to make the output nice.
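A rough sketch of that fixture-inheritance pattern, NUnit-style, using the stack example:

using System.Collections.Generic;
using NUnit.Framework;

public abstract class An_empty_stack
{
    protected Stack<int> stack;

    [SetUp]
    public void BaseContext()
    {
        stack = new Stack<int>();
        EstablishContext();
    }

    // Derived contexts override this to build on the empty stack.
    protected virtual void EstablishContext() { }
}

[TestFixture]
public class When_five_is_pushed_on : An_empty_stack
{
    protected override void EstablishContext()
    {
        stack.Push(5);
    }

    [Test]
    public void should_return_five_when_popped()
    {
        Assert.AreEqual(5, stack.Pop());
    }
}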
I vote for calling the test case class EmployeeReaderTestCase and naming the methods as described at http://xunitpatterns.com/Organization.html and http://xunitpatterns.com/Organization.html#Test%20Naming%20Conventions

Is this unit test excessive?

Given the following SUT, would you consider this unit test to be unnecessary?
Edit: we cannot assume the names will match, so reflection wouldn't work.
Edit 2: In actuality, this class would implement an IMapper interface and there would be full-blown behavioural (mock) testing at the business logic layer of the application. This test just happens to be the lowest level of testing that must be state based. I question whether this test is truly necessary, because the test code is almost identical to the source code itself, and based on actual experience I don't see how this unit test makes maintenance of the application any easier.
//SUT
public class Mapper
{
    public void Map(DataContract from, DataObject to)
    {
        to.Value1 = from.Value1;
        to.Value2 = from.Value2;
        ....
        to.Value100 = from.Value100;
    }
}
//Unit Test
public class MapperTest
{
    [Test]
    public void Map_CopiesAllValues()
    {
        DataContract contract = new DataContract() { ... };
        DataObject dataObject = new DataObject() { ... };   // 'do' is a reserved word in C#, so renamed
        Mapper mapper = new Mapper();

        mapper.Map(contract, dataObject);

        Assert.AreEqual(dataObject.Value1, contract.Value1);
        ...
        Assert.AreEqual(dataObject.Value100, contract.Value100);
    }
}
I would question the construct itself, not the need to test it.
[Reflection would be far less code - see the sketch below.]
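A rough sketch of what such a reflection-based test might look like; note it only helps where the property names line up (which the question's edit says cannot be assumed), and BuildFullyPopulatedContract is a hypothetical helper that fills every ValueN with a distinct value:

[Test]
public void Map_CopiesEveryMatchingProperty()
{
    var contract = BuildFullyPopulatedContract();   // hypothetical helper
    var dataObject = new DataObject();

    new Mapper().Map(contract, dataObject);

    foreach (var source in typeof(DataContract).GetProperties())
    {
        var target = typeof(DataObject).GetProperty(source.Name);
        if (target == null) continue;   // skip names that don't match

        Assert.AreEqual(source.GetValue(contract, null),
                        target.GetValue(dataObject, null),
                        "Property not copied: " + source.Name);
    }
}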
I'd argue that it is necessary.
However, it would be better as 100 separate unit tests, each checking one value.
That way, when something goes wrong with Value65, you can run the tests and immediately find that Value65 and Value66 are being transposed.
Really, it's this kind of simple code where you switch your brain off and forget that errors happen. Having tests in place means you pick them up, and not your customers.
However, if you have a class with 100 properties all named ValueXXX, then you might be better off using an array or a list.
It is not excessive. I'm just not sure it fully focuses on what you want to test.
"Under the strict definition, for QA purposes, the failure of a UnitTest implicates only one unit. You know exactly where to search to find the bug."
The power of a unit test is in having a known, correct resultant state; the focus should be the values assigned to DataContract. Those are the bounds we want to push, to ensure that all possible values for DataContract can be successfully copied into DataObject. DataContract must be populated with edge-case values.
PS. David Kemp is right: 100 well-designed tests would be most true to the concept of unit testing.
Note: for this test we must assume that DataContract populates itself correctly when built (that requires separate tests).
It would be better if you could test at a higher level, i.e. the business logic that requires you to create the Mapper.Map() function.
Not if this was the only unit test of this kind in the entire app. However, the second another like it showed up, you'd see me scrunch my eyebrows and start thinking about reflection.
Not excessive.
I agree the code looks strange, but that said:
The beauty of a unit test is that once it's done, it's there forever. If anyone, for any reason, decides to change that implementation for something more "clever", the test should still pass, so it's not a big deal.
I personally would probably use a Perl script to generate the code, as I would get bored of replacing the numbers for each assert and would probably make some mistakes along the way; the Perl script (or whatever script) would be faster for me.