If unit-test names can become outdated over time, and if you consider that the test itself is the most important thing, is it still important to choose wise test names?
i.e.
[Test]
public void ShouldValidateUserNameIsLessThan100Characters() {}
versus
[Test]
public void UserNameTestValidation1() {}
The name of any method should make it clear what it does.
IMO, your first suggestion is a bit long and the second one isn't informative enough. Also it's probably a bad idea to put "100" in the name, as that's very likely to change. What about:
public void validateUserNameLength()
If the test changes, the name should be updated accordingly.
Yes, the names are totally important, especially when you are running the tests in the console or on continuous integration servers. Jay Fields wrote a post about it.
Moreover, put good test names with one assertion per test and your suite will give you great reports when a test fails.
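For example (a minimal NUnit-style sketch; UserNameValidator and its IsValid method are invented here, not taken from the question), two single-assertion tests with descriptive names produce failure reports that read like the broken rule itself:
using NUnit.Framework;

// Hypothetical class under test, defined only so the sketch is self-contained.
public class UserNameValidator
{
    private readonly int maximumLength;
    public UserNameValidator(int maximumLength) { this.maximumLength = maximumLength; }
    public bool IsValid(string userName) => userName != null && userName.Length <= maximumLength;
}

[TestFixture]
public class UserNameValidatorTests
{
    // One behaviour per test, one assertion per test: the CI report alone
    // says which rule broke, without opening the test code.
    [Test]
    public void ShouldRejectUserNameLongerThanMaximumLength()
    {
        var validator = new UserNameValidator(maximumLength: 100);
        Assert.IsFalse(validator.IsValid(new string('a', 101)));
    }

    [Test]
    public void ShouldAcceptUserNameAtMaximumLength()
    {
        var validator = new UserNameValidator(maximumLength: 100);
        Assert.IsTrue(validator.IsValid(new string('a', 100)));
    }
}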
Very. Equally important as choosing good method and variable names.
Even more so if your test suite is going to be referred to by new devs in the future.
As for your original question, definitely Answer 1. Typing a few more characters is a small price to pay for readability, for you and for others. It'll eliminate the 'what was I thinking here?' as well as the 'WTF is this guy getting at in this test?' moments. Good names also give you:
a quick zoom-in when you're in to fix something someone else wrote
an instant update for any test-suite visitor. If done correctly, just going over the names of the test cases will inform the reader of the specs for the unit.
Yes.
[Test]
public void UsernameValidator_LessThanLengthLimit_ShouldValidate() {}
Put the test subject first, the test statement next, and the expected result last.
That way, you get a clear indication of what it is doing, and you can easily sort by name :)
In Clean Code, page 124, Robert C. Martin writes:
The moral of the story is simple: Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code.
I think if one cannot find a good, concise name for a test method, it's a sign that the design of this test is incorrect. A good method name also helps you find out what happened in less time.
Yes, the whole point of the test name is that it tells you what doesn't work when the test fails.
I wouldn't put the conditions that the test needs to meet in the name, because the conditions may change over time. In your example, I'd recommend a name like
UserNameLengthValidate()
or
UserNameLengthTest()
or something similar to explain what the test does, but not presuming the testing/validation parameters.
Yes, the names of the code under test (methods, properties, whatever) can change, but I contend your existing tests should fail if the expectations change. That is the true value of having well-constructed tests, not perusing a list of test names. That being said, well named test methods are great tools for getting new developers on board, helping them locate "executable documentation" with which they can kick the tires of existing code -- so I would keep the names of test methods up to date just as I would keep the assertions made by the test methods up to date.
I name my tests using the following pattern. Each test fixture attempts to focus on one class and is usually named {ClassUnderTest}Test. I name each test method {MemberUnderTest}_{Assertion}.
[TestFixture]
public class IndexableFileTest
{
    [Test]
    public void Connect_InitializesReadOnlyProperties()
    {
        // ...
    }

    [Test, ExpectedException(typeof(NotInitializedException))]
    public void IsIndexable_ErrorWhenNotConnected()
    {
        // ...
    }

    [Test]
    public void IsIndexable_True()
    {
        // ...
    }

    [Test]
    public void IsIndexable_False()
    {
        // ...
    }
}
Having a very descriptive name helps to instantly see what is not working correctly, so that you don't actually need to look at the unit test code.
Also, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation to the behavior of the unit under test.
Note, this only works when unit tests are very specific and do not validate too much within one unit test.
So for example:
[Test]
void TestThatExceptionIsRaisedWhenStringLengthLargerThan100()
[Test]
void TestThatStringLengthOf99IsAccepted()
The name needs to matter within reason. I don't want an email from the build saying that test 389fb2b5-28ad3 failed, but just knowing that it was a UserName test as opposed to something else would help ensure the right person gets to do the diagnosis.
[RowTest]
[Row("GoodName")]
[Row("GoodName2")]
public void Should_validate_username()
{
}
[RowTest]
[Row("BadUserName")]
[Row("Bad%!Name")]
public void Should_invalidate_username()
{
}
This might make more sense for more complex types of validation really.
Yes, they are. I'd personally recommend looking at SSW's rules to better unit tests. It contains some very helpful naming guidelines.
As far as I understand it, the TDD and BDD cycle is something like this:
Start by writing tests
See them fail
Write code
Pass the tests
Repeat
The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?
You have the essence of it, but I would change one part of your description. You don't write tests before you write code - you write a test before you write code. Then - before writing any more tests - you write just enough code to get your test to pass. When it's passing, you look for opportunities to improve the code, and make the improvements while keeping your tests passing - and then you write your second test. The point is, you're focusing on one tiny bit of functionality at any given time. What is the next thing you want your program to do? Write a test for that, and nothing more. Get that test passing. Clean the code. What's the next thing you want it to do? Iterate until you're happy.
The thing is, if you write all of your tests before writing any code, you don't have that focus. It's one test at a time.
Yes, that is correct. If you check out Michael Hartl's book on Ruby on Rails (free for HTML viewing), you will see how he does this specifically. So to add on to what lared said, let's say your first job is to add a new button to a web page. Your process would look like this:
Write a test to look for the button on the page visually.
Verify that the test fails (there should not be a button present, therefore it should fail).
Write code to place button on the page.
Verify test passes.
TDD will save your bacon when you accidentally do something to your code that breaks an old test. For example, you change the button to a link accidentally. The test will fail and alert you to the problem.
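As a rough sketch of step 1 in C# (using Selenium WebDriver and NUnit rather than the Rails tooling Hartl uses; the URL and the button id are invented for illustration):
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CheckoutPageTests
{
    [Test]
    public void CheckoutPage_ShouldContainSubmitButton()
    {
        // Written (and failing) before the button exists on the page.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/checkout");

            var buttons = driver.FindElements(By.Id("submit-order"));

            Assert.IsTrue(buttons.Count > 0, "Expected a submit button on the checkout page");
        }
    }
}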
If you are using a real programming language (you know, with a compiler and all), then yes, of course you have to write class skeletons or interfaces; otherwise your tests will not even compile.
If you are using a scripting language, then you do not even have to write skeletons or interfaces, because your test script will happily begin to run and will fail on the first non-existent class or method that it encounters.
The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?
To expand on a point that lared made in his comment:
Then you write tests, which fail because the classes/whatever doesn't exist, and then you write the minimal amount of code which makes them pass
One thing to remember with TDD is that the test you are writing is the first client of your code. Therefore I wouldn't worry about not having the classes or interfaces defined already - because as he pointed out, simply by writing code referencing classes that don't exist, you will get your first "Red" in the cycle - namely, your code won't compile! That's a perfectly valid test.
TDD can also mean Test Driven Design
Once you embrace this idea, you will find that writing the test first serves less as a simple "is this code correct" and more of a "is this code right" guideline, so you'll find that you actually end up producing production code that is not only correct, but well structured as well.
A video showing this process would be super, but I don't have one, so I'll make a stab at an example instead. Note this is a super simple example and ignores up-front pencil and paper planning / real-world requirements from business, which will often be the driving force behind your design process.
Anyway suppose we want to create a simple Person object that can store a person's name and age. And we'd like to do this via TDD so we know it's correct.
So we think about it for a minute, and write our first test (note: example using pseudo C# / pseudo test framework)
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person();

    Assert.Empty(sut.Name);
    Assert.Zero(sut.Age);
}
Straight away we have a failing test: it won't compile because the Person class doesn't exist. So you use your IDE to auto-create the class for you:
public class Person
{
    public int Age {get;set;}
    public string Name {get;set;}
}
OK, now you have a first passing test. But now as you look at that class, you realise that there is nothing to ensure a person's age is always positive (>0). Let's assert that this is the case:
public void GivenANegativeAgeValue_PersonWillRejectIt()
{
    var sut = new Person();

    Assert.CausesException(() => sut.Age = -100);
}
Well, that test fails so let's fix up the class:
public class Person
{
    protected int age;

    public int Age
    {
        get { return age; }
        set
        {
            if (value <= 0)
            {
                throw new InvalidOperationException("Age must be a positive number");
            }
            age = value;
        }
    }

    public string Name {get;set;}
}
But now you might say to yourself - OK, since I know that a person's age can never be <=0, why do I even bother creating a writable property? Do I always want to have to write two statements, one to create a Person and another to set their Age? What if I forgot to do it in one part of my code? What if I created a Person in one part of my code, and then later on, in another module, tried to assign a negative value to Age? Surely Age must be an invariant of Person, so let's fix this up:
public class Person
{
    public Person(int age)
    {
        if (age <= 0)
        {
            throw new InvalidOperationException("Age must be a positive number");
        }
        this.Age = age;
    }

    public int Age {get;protected set;}
    public string Name {get;set;}
}
And of course you have to fix your tests because they won't compile any more - and in fact now you realise that the second test is redundant and can be dropped!
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person(42);

    Assert.Empty(sut.Name);
    Assert.Equal(42, sut.Age);
}
And you will then probably go through a similar process with Name, and so on.
Now I know this seems like a terribly long-winded way of creating a class, but consider that you have basically designed this class from scratch with built-in defences against invalid state - for example you will never ever have to debug code like this:
//A Person instance, 6,000 lines and 3 modules away from where it was instantiated
john.Age = x; //Crash because x is -42
or
//A Person instance, reserialised from a message queue in another process
var someValue = 2015/john.Age; //DivideByZeroException because we forgot to assign john's age
For me, this is one of the key benefits of TDD: using it not only as a testing tool but as a design tool, one that makes you think about the production code you are implementing, forces you to consider how the classes you create could end up in invalid, application-killing states and how to guard against that, and helps you write objects that are easy to use and don't require their consumers to understand how they work, only what they do.
Since any modern IDE worth its salt will provide you with the opportunity to create missing classes / interfaces with a couple of keystrokes or mouse clicks, I believe it's well worth trying this approach.
TDD and BDD are different things that share a common mechanism. This shared mechanism is that you write something that 'tests' something before you write the thing that does something. You then use the failures to guide/drive the development.
You write the tests by thinking about the problem you are trying to solve, and fleshing out the details by pretending that you have an ideal solution that you can test. You write your test to use your ideal solution. Doing this does all sorts of things like:
Discover names of things you need for your solution
Uncover interfaces for your things to make them easy to use
Experience failures with your things
...
A difference between BDD and TDD is that BDD is much more focused on the 'what' and the 'why', rather than the 'how'. BDD is very concerned about the appropriate use of language to describe things. BDD starts at a higher level of abstraction. When you get to areas where the detail overwhelms language, TDD is used as a tool to implement the detail.
This idea that you can choose to think of things and write about them at different levels of abstraction is key.
You write the 'tests' you need by choosing:
the appropriate language for your problem
the appropriate level of abstraction to explain your problem simply and clearly
an appropriate mechanism to call your functionality.
In my project we have a mass of methods that each test something. If you want to understand what is going on, you have to look through all of the methods. When you have a class with 20 test methods, it's challenging to find the test case(s) you care about in that mass of methods.
I have never seen interfaces used to define the test cases that you cover in your tests.
For example
public class A {
    public SomeResult doSomething(Param param) {
        .....
    }

    ..... some other methods
}
For this method there are 4 cases (for example):
check that the method works as expected with a null param
check that the method throws a runtime exception for some range of param values
check that the method returns the expected result (normal case)
check something different
In our project, to test those cases, people just create 4 methods. They can appear in any order: the first 2 cases might sit at the beginning of the test class and the last 2 might be written at the end (200 lines of code below). Also, from the test's name it is not always clear what the test method checks.
Is it a good way to describe the test cases in an interface like this:
public interface ATestSpecification {
    void doSomething_checkForNullParam();
    void doSomething_checkExceptionForNotAllowedParam();
    void doSomething_normalCase();
    void doSomething_checkSomethingDifferent();
}
And the test class:
public class ATest implements ATestSpecification {
    ...
    // implement test cases, described in the test specification
    ...
}
Since developer tests are essentially documentation and exist for the convenience of the developer(s) working on the code, I would recommend that you do away with that idea of creating interfaces for test methods--have never seen that before and am sorry to have seen it just now. The existence of those interfaces can only get in your way when you search the code for references to a method name or have your IDE display a call hierarchy on any method that you would want to find an example of how to use correctly. Don't put things in your own way.
In the case of tests, because they are documentation, I tend to diverge from the usual pattern for naming methods in Java. That is, I will abandon using camelCase in favor of all_lowercase_separated_by_underscores, which seems easier to read, generally. Thus I will have "should_do_something" or "ensure_whatever" so that the test case name helps me find what I might be looking for. Also, I would be less focused on testing methods and more focused on testing behavior--I know that sounds like splitting hairs, but that's the way I think of it. Figure out what the class needs to do and write those tests then implement using TDD. I usually don't feel the need to back-fill any tests if I use TDD or a close approximation thereof. Jimmy is completely correct about keeping your code focused and following SRP.
Hope that helps!
EDIT: Naming conventions are always controversial -- just pick one that works for you. It's come up here and here before.
I currently use a simple convention for my unit tests. If I have a class named "EmployeeReader", I create a test class named "EmployeeReader.Tests". I then create all the tests for the class in the test class, with names such as:
Reading_Valid_Employee_Data_Correctly_Generates_Employee_Object
Reading_Missing_Employee_Data_Throws_Invalid_Employee_ID_Exception
and so on.
I have recently been reading about a different type of naming convention used in BDD. I like the readability of this naming, to end up with a list of tests something like:
When_Reading_Valid_Employee (fixture)
Employee_Object_Is_Generated (method)
Employee_Has_Correct_ID (method)
When_Reading_Missing_Employee (fixture)
An_Invalid_Employee_ID_Exception_Is_Thrown (method)
and so on.
Has anybody used both styles of naming? Can you provide any advice, benefits, drawbacks, gotchas, etc. to help me decide whether to switch or not for my next project?
The naming convention I've been using is:
functionName_shouldDoThis_whenThisIsTheSituation
For example, these would be some test names for a stack's 'pop' function
pop_shouldThrowEmptyStackException_whenTheStackIsEmpty
pop_shouldReturnTheObjectOnTheTopOfTheStack_whenThereIsAnObjectOnTheStack
Your second example (having a fixture for each logical "task", rather than one for each class) has the advantage that you can have different SetUp and TearDown logic for each task, thus simplifying your individual test methods and making them more readable.
You don't need to settle on one or the other as a standard. We use a mixture of both, depending on how many different "tasks" we have to test for each class.
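For example (a hedged NUnit sketch of the per-context fixtures; the EmployeeReader API, the Employee type and the exception here are stand-ins invented so the sketch compiles, not the real classes):
using System;
using NUnit.Framework;

// Hypothetical stand-ins for the types named in the question.
public class Employee { public string Id { get; set; } }
public class InvalidEmployeeIdException : Exception { }
public class EmployeeReader
{
    public Employee Read(string id)
    {
        if (id == "missing-id") throw new InvalidEmployeeIdException();
        return new Employee { Id = id };
    }
}

[TestFixture]
public class When_Reading_Valid_Employee
{
    private Employee employee;

    [SetUp]
    public void Given_a_valid_employee_record()
    {
        // Setup that only this context needs.
        employee = new EmployeeReader().Read("valid-employee-id");
    }

    [Test]
    public void Employee_Object_Is_Generated()
    {
        Assert.IsNotNull(employee);
    }

    [Test]
    public void Employee_Has_Correct_ID()
    {
        Assert.AreEqual("valid-employee-id", employee.Id);
    }
}

[TestFixture]
public class When_Reading_Missing_Employee
{
    [Test]
    public void An_Invalid_Employee_ID_Exception_Is_Thrown()
    {
        Assert.Throws<InvalidEmployeeIdException>(() => new EmployeeReader().Read("missing-id"));
    }
}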
I feel the second is better because it makes your unit tests more readable to others; very long names make the code harder to read and to skim through. If you still feel there's any ambiguity as to what the test does, you can add comments to clarify it.
Part of the reasoning behind the 2nd naming convention that you reference is that you are creating tests and behavioural specifications at the same time. You establish the context in which things are happening and what should actually then happen within that context. (In my experience, the observations/test-methods often start with "should_," so you get a standard "When_the_invoicing_system_is_told_to_email_the_client," "should_initiate_connection_to_mail_server" format.)
There are tools that will reflect over your test fixtures and output a nicely formatted html spec sheet, stripping out the underscores. You end up with human-readable documentation that is in sync with the actual code (as long as you keep your test coverage high and accurate).
Depending on the story/feature/subsystem on which you're working, these specifications can be shown to and understood by non-programmer stakeholders for verification and feedback, which is at the heart of agile and BDD in particular.
I use the second method, and it really helps with describing what your software should do. I also use nested classes to describe more detailed context.
In essence, test classes are contexts, which can be nested, and methods are all one-line assertions. For example,
public class MyClassSpecification
{
    protected MyClass instance = new MyClass();

    public class When_foobar_is_42 : MyClassSpecification
    {
        public When_foobar_is_42() {
            this.instance.SetFoobar( 42 );
        }

        public class GetAnswer : When_foobar_is_42
        {
            private Int32 result;

            public GetAnswer() {
                this.result = this.instance.GetAnswer();
            }

            public void should_return_42() {
                Assert.AreEqual( 42, result );
            }
        }
    }
}
which will give me following output in my test runner:
MyClassSpecification+When_foobar_is_42+GetAnswer
should_return_42
I've been down the two roads you describe in your question, as well as a few others... Your first alternative is pretty straightforward and easy to understand for most people. I personally like the BDD style (your second example) more because it isolates different contexts and groups observations on those contexts. The only real downside is that it generates more code, so starting to do it feels slightly more cumbersome until you see the neat tests. Also, if you use inheritance to reuse fixture setup, you want a test runner that outputs the inheritance chain. Consider a class "An_empty_stack" that you want to reuse, so you write another class, "When_five_is_pushed_on : An_empty_stack"; you want that full chain as output, not just "When_five_is_pushed_on". If your test runner does not support this, your tests will contain redundant information like "When_five_is_pushed_on_empty_stack : An_empty_stack" just to make the output nice.
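To make the inheritance idea concrete, a small NUnit-style sketch (NUnit runs base-class SetUp methods before derived ones, which is what makes the reuse work; the stack example follows the names above):
using System.Collections.Generic;
using NUnit.Framework;

public abstract class An_empty_stack
{
    protected Stack<int> stack;

    [SetUp]
    public void Create_empty_stack()
    {
        stack = new Stack<int>();
    }
}

[TestFixture]
public class When_five_is_pushed_on : An_empty_stack
{
    [SetUp]
    public void Push_five()
    {
        // The base fixture's SetUp has already run, so the stack exists and is empty.
        stack.Push(5);
    }

    [Test]
    public void should_contain_one_element()
    {
        Assert.AreEqual(1, stack.Count);
    }

    [Test]
    public void should_return_five_when_popped()
    {
        Assert.AreEqual(5, stack.Pop());
    }
}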
I vote for calling the test case class EmployeeReaderTestCase and naming the methods as described at http://xunitpatterns.com/Organization.html and http://xunitpatterns.com/Organization.html#Test%20Naming%20Conventions
Given the following SUT, would you consider this unit test to be unnecessary?
Edit: we cannot assume the names will match, so reflection wouldn't work.
Edit 2: in actuality, this class would implement an IMapper interface and there would be full-blown behavioral (mock) testing at the business logic layer of the application. This test just happens to be the lowest level of testing that must be state-based. I question whether this test is truly necessary because the test code is almost identical to the source code itself, and based on actual experience I don't see how this unit test makes maintenance of the application any easier.
//SUT
public class Mapper
{
    public void Map(DataContract from, DataObject to)
    {
        to.Value1 = from.Value1;
        to.Value2 = from.Value2;
        ....
        to.Value100 = from.Value100;
    }
}
//Unit Test
[TestFixture]
public class MapperTest
{
    [Test]
    public void Map_CopiesAllValues()
    {
        DataContract contract = new DataContract() { ... };
        DataObject dataObject = new DataObject() { ... };
        Mapper mapper = new Mapper();

        mapper.Map(contract, dataObject);

        Assert.AreEqual(dataObject.Value1, contract.Value1);
        ...
        Assert.AreEqual(dataObject.Value100, contract.Value100);
    }
}
I would question the construct itself, not the need to test it.
[reflection would be far less code]
I'd argue that it is necessary.
However, it would be better as 100 separate unit tests, each of which checks one value.
That way, when something goes wrong with Value65, you can run the tests and immediately find that Value65 and Value66 are being transposed.
Really, it's with this kind of simple code that you switch your brain off and forget that errors happen. Having tests in place means you pick them up, not your customers.
However, if you have a class with 100 properties all named ValueXXX, then you might be better using an Array or a List.
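A sketch of the "one value per test" shape, reusing the Mapper, DataContract and DataObject from the question (only the first two values shown, and their types are assumed to be strings for illustration):
[TestFixture]
public class MapperValueTests
{
    private DataContract contract;
    private DataObject result;

    [SetUp]
    public void MapASampleContract()
    {
        contract = new DataContract { Value1 = "a", Value2 = "b" /* ... */ };
        result = new DataObject();
        new Mapper().Map(contract, result);
    }

    [Test]
    public void Map_CopiesValue1()
    {
        Assert.AreEqual(contract.Value1, result.Value1);
    }

    [Test]
    public void Map_CopiesValue2()
    {
        Assert.AreEqual(contract.Value2, result.Value2);
    }

    // ...and so on up to Value100.
}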
It is not excessive. I'm just not sure it fully focuses on what you want to test.
"Under the strict definition, for QA purposes, the failure of a UnitTest implicates only one unit. You know exactly where to search to find the bug."
The power of a unit test is in having a known correct resultant state; the focus should be the values assigned to DataContract. Those are the bounds we want to push, to ensure that all possible values for DataContract can be successfully copied into DataObject. DataContract must be populated with edge-case values.
PS. David Kemp is right: 100 well-designed tests would be the most true to the concept of unit testing.
Note : For this test we must assume that DataContract populates perfectly when built (that requires separate tests).
It would be better if you could test at a higher level, i.e. the business logic that requires you to create the Mapper.Map() function.
Not if this was the only unit test of this kind in the entire app. However, the second another like it showed up, you'd see me scrunch my eyebrows and start thinking about reflection.
Not excessive.
I agree the code looks strange, but that said:
The beauty of a unit test is that once it is done it is there forever, so if anyone, for any reason, decides to change that implementation for something more "clever", the test should still pass, so it's not a big deal.
I personally would probably have a perl script to generate the code, as I would get bored of replacing the numbers for each assert and would probably make some mistakes along the way, and the perl script (or whatever script) would be faster for me.
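For instance, a throwaway C# console snippet (standing in for the perl script mentioned above) could print the hundred assert lines to paste into the test:
using System;

class GenerateAsserts
{
    static void Main()
    {
        // Prints one assertion per property; copy the output into the test body.
        for (int i = 1; i <= 100; i++)
        {
            Console.WriteLine($"Assert.AreEqual(dataObject.Value{i}, contract.Value{i});");
        }
    }
}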
Do you write one test per function/method, with multiple checks in the test, or a test for each check?
One test per check and super descriptive names, per instance:
@Test
public void userCannotVoteDownWhenScoreIsLessThanOneHundred() {
    ...
}
Both having only one assertion and using good names give me a better report when a test fails. They scream at me: "You broke THAT rule!".
I have a test per capability the function is offering. Each test may have several assertions, however.
The name of the testcase indicates the capability being tested.
Generally, for one function, I have several "sunny day" tests and one or a few "rainy day" scenarios, depending on its complexity.
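To make that concrete, a tiny sketch (the Calculator class is made up for illustration and defined here only so the example is self-contained):
using System;
using NUnit.Framework;

// Hypothetical class under test.
public static class Calculator
{
    public static int Divide(int dividend, int divisor) => dividend / divisor;
}

[TestFixture]
public class CalculatorDivideTests
{
    // "Sunny day" scenarios
    [Test]
    public void Divide_ReturnsQuotient_ForPositiveNumbers()
    {
        Assert.AreEqual(2, Calculator.Divide(10, 5));
    }

    [Test]
    public void Divide_ReturnsNegativeQuotient_WhenSignsDiffer()
    {
        Assert.AreEqual(-2, Calculator.Divide(10, -5));
    }

    // "Rainy day" scenario
    [Test]
    public void Divide_Throws_WhenDivisorIsZero()
    {
        Assert.Throws<DivideByZeroException>(() => Calculator.Divide(10, 0));
    }
}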
BDD (Behavior Driven Development)
Though I'm still learning, it's basically TDD organized/focused around how your software will actually be used... NOT how it will be developed/built.
Wikipedia
General Info
BTW, as far as whether to do multiple asserts per test method goes, I would recommend trying it both ways. Sometimes you'll see where one strategy left you in a bind, and it'll start making sense why you normally just use one assert per method.
I think that the rule of a single assertion is a little too strict. In my unit tests, I try to follow the rule of a single group of assertions -- you can use more than one assertion in one test method, as long as you do the checks one after another (you don't change the state of the tested class between the assertions).
So, in Python, I believe a test like this is correct:
def testGetCountReturnsCountAndEnd(self):
    count, endReached = self.handler.getCount()
    self.assertEqual(count, 0)
    self.assertTrue(endReached)
but this one should be split into two test methods:
def testGetCountReturnsOneAfterPut(self):
    self.assertEqual(self.handler.getCount(), 0)
    self.handler.put('foo')
    self.assertEqual(self.handler.getCount(), 1)
Of course, in case of long and frequently used groups of assertions, I like to create custom assertion methods -- these are especially useful for comparing complex objects.
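For example, a custom assertion helper for a complex object might look like this (sketched in C# to match most of the other snippets in this thread; the Address type and its properties are invented for illustration):
using NUnit.Framework;

// Hypothetical complex type, defined only so the sketch is self-contained.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
}

public static class CustomAsserts
{
    // Checks the fields one after another (no state changes in between) and
    // reports exactly which field differed, instead of a bare object comparison.
    public static void AssertAddressesAreEqual(Address expected, Address actual)
    {
        Assert.AreEqual(expected.Street, actual.Street, "Street differs");
        Assert.AreEqual(expected.City, actual.City, "City differs");
        Assert.AreEqual(expected.PostalCode, actual.PostalCode, "PostalCode differs");
    }
}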
A test case for each check. It's more granular. It makes it much easier to see what specific test case failed.
I write at least one test per method, and sometimes more if the method requires different setUp to test the good cases and the bad cases.
But you should NEVER test more than one method in one unit test. It reduces the amount of work and errors when fixing your tests in case your API changes.
I would suggest a test case for every check.
The more atomic you keep your tests, the better your results will be!
Keeping multiple checks in a single test will help you generate a report of how much functionality needs to be corrected.
Keeping atomic test cases will show you the overall quality!
In general one testcase per check. When tests are grouped around a particular function it makes refactoring (eg removing or splitting) that function more difficult because the tests also need a lot of changes. It is much better to write the tests for each type of behaviour that you want from the class. Sometimes when testing a particular behaviour it makes sense to have multiple checks per test case. However, as the tests become more complicated it makes them harder to change when something in the class changes.
In Java/Eclipse/JUnit I use two source directories (src and test) with the same tree.
If I have a src/com/mycompany/whatever/TestMePlease with methods worth testing (e.g. deleteAll(List<?> stuff) throws MyException) I create a test/com/mycompany/whatever/TestMePleaseTest with methods to test different use cases/scenarios:
@Test
public void deleteAllWithNullInput() { ... }

@Test(expected = MyException.class) // not sure about actual syntax here :-P
public void deleteAllWithEmptyInput() { ... }

@Test
public void deleteAllWithSingleLineInput() { ... }

@Test
public void deleteAllWithMultipleLinesInput() { ... }
Having different checks is simpler to handle for me.
Nonetheless, since every test should be consistent, if I want my initial data set to stay unaltered I sometimes have, for example, to create stuff and delete it in the same check to ensure every other test finds the data set pristine:
@Test
public void insertAndDelete() {
    assertTrue(/* stuff does not exist yet */);
    createStuff();
    assertTrue(/* stuff does exist now */);
    deleteStuff();
    assertTrue(/* stuff does not exist anymore */);
}
Don't know if there are smarter ways to do that, to tell you the truth...
I like to have a test per check in a method and a meaningful name for the test method. For instance:
testAddUser_shouldThrowIllegalArgumentExceptionWhenUserIsNull
A testcase per check. If you name the method appropriately, it can provide a valuable hint towards the problem when one of these tests causes a regression failure.
I try to separate out Database tests and Business Logic tests (using BDD as others here recommend); running the Database ones first ensures your Database is in a good state before asking your application to play with it.
There's a good podcast show with Andy Leonard on what it involves and how to do it, and if you'd like a bit more information, I've written a blog post on the subject (shameless plug ;o)
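For what it's worth, one way to keep the two groups separable is test categories (a sketch assuming NUnit; the class and category names are just examples):
using NUnit.Framework;

[TestFixture]
[Category("Database")]
public class EmployeeRepositoryTests
{
    [Test]
    public void SavedEmployee_CanBeReadBack()
    {
        // ...exercises the real database...
    }
}

[TestFixture]
[Category("BusinessLogic")]
public class PayrollCalculatorTests
{
    [Test]
    public void Overtime_IsPaidAtTimeAndAHalf()
    {
        // ...pure in-memory logic, no database...
    }
}
The database group can then be run on its own first, for example via the NUnit 3 console runner's category filter (something like --where "cat == Database"), before the rest of the suite.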