What are some popular naming conventions for Unit Tests? [closed] - unit-testing

General
Follow the same standards for all tests.
Be clear about the state under test.
Be specific about the expected behavior.
Examples
1) MethodName_StateUnderTest_ExpectedBehavior
public void Sum_NegativeNumberAs1stParam_ExceptionThrown()
public void Sum_NegativeNumberAs2ndParam_ExceptionThrown()
public void Sum_SimpleValues_Calculated()
Source: Naming standards for Unit Tests
2) Separating Each Word By Underscore
public void Sum_Negative_Number_As_1st_Param_Exception_Thrown()
public void Sum_Negative_Number_As_2nd_Param_Exception_Thrown()
public void Sum_Simple_Values_Calculated()
Other
End method names with Test
Start method names with class name
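For illustration, a rough sketch of those two other styles might look like this (the Calculator name is just made up for the example):
public class CalculatorTests
{
    // "End method names with Test":
    public void SumSimpleValuesCalculatedTest() { /* ... */ }

    // "Start method names with class name":
    public void Calculator_Sum_SimpleValuesCalculated() { /* ... */ }
}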

I am pretty much with you on this one, man. The naming conventions you have used are:
Clear about the state under test.
Specific about the expected behaviour.
What more do you need from a test name?
Contrary to Ray's answer, I don't think the Test prefix is necessary. It's test code, we know that. If you need to do this to identify the code, then you have bigger problems: your test code should not be mixed up with your production code.
As for length and use of underscores, it's test code, who the hell cares? Only you and your team will see it, so as long as it is readable and clear about what the test is doing, carry on! :)
That said, I am still quite new to testing and blogging my adventures with it :)

This is also worth a read: Structuring Unit Tests
The structure has a test class per class being tested. That’s not so unusual. But what was unusual to me was that he had a nested class for each method being tested.
e.g.
using Xunit;

public class TitleizerFacts
{
    public class TheTitleizerMethod
    {
        [Fact]
        public void NullName_ReturnsDefaultTitle()
        {
            // Test code
        }

        [Fact]
        public void Name_AppendsTitle()
        {
            // Test code
        }
    }

    public class TheKnightifyMethod
    {
        [Fact]
        public void NullName_ReturnsDefaultTitle()
        {
            // Test code
        }

        [Fact]
        public void MaleNames_AppendsSir()
        {
            // Test code
        }

        [Fact]
        public void FemaleNames_AppendsDame()
        {
            // Test code
        }
    }
}
And here is why:
Well for one thing, it’s a nice way to keep tests organized. All the
tests (or facts) for a method are grouped together. For example, if
you use the CTRL+M, CTRL+O shortcut to collapse method bodies, you can
easily scan your tests and read them like a spec for your code.
I also like this approach:
MethodName_StateUnderTest_ExpectedBehavior
So perhaps adjust to:
StateUnderTest_ExpectedBehavior
Because each test will already be inside a nested class named after the method under test.

I tend to use the convention of MethodName_DoesWhat_WhenTheseConditions so for example:
Sum_ThrowsException_WhenNegativeNumberAs1stParam
However, what I do see a lot is to make the test name follow the unit testing structure of
Arrange
Act
Assert
Which also follows the BDD / Gherkin syntax of:
Given
When
Then
which would be to name the test in the manner of: UnderTheseTestConditions_WhenIDoThis_ThenIGetThis
so to your example:
WhenNegativeNumberAs1stParam_Sum_ThrowsAnException
However, I do much prefer putting the name of the method being tested first, because then the tests can be arranged alphabetically, or appear alphabetically sorted in the member dropdown box in Visual Studio, and all the tests for one method are grouped together.
In any case, I like separating the major sections of the test name with underscores, as opposed to every word, because I think it makes it easier to read and get the point of the test across.
In other words, I like: Sum_ThrowsException_WhenNegativeNumberAs1stParam better than Sum_Throws_Exception_When_Negative_Number_As_1st_Param.
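For instance, a minimal sketch of that style with the Arrange/Act/Assert sections spelled out might look like this (NUnit attributes assumed; the Calculator class and the ArgumentException are invented for the example):
using System;
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Sum_ThrowsException_WhenNegativeNumberAs1stParam()
    {
        // Arrange: Calculator is a hypothetical class under test.
        var calculator = new Calculator();

        // Act + Assert: the exception type is assumed for the example.
        Assert.Throws<ArgumentException>(() => calculator.Sum(-1, 2));
    }
}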

I name my test methods like other methods, using PascalCasing, without any underscores or separators. I leave out the postfix Test for the method, because it adds no value. That the method is a test method is indicated by the attribute TestMethod.
[TestMethod]
public void CanCountAllItems() {
// Test the total count of items in collection.
}
Because each test class should only test one other class, I leave the name of the class out of the method name. The class that contains the test methods is named after the class under test, with the postfix "Tests".
[TestClass]
public class SuperCollectionTests {
    // Any test methods that test the class SuperCollection
}
For methods that test for exceptions or actions that are not possible, I prefix the test method with the word Cannot.
[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void CannotAddSameObjectAgain() {
    // Cannot add the same object again to the collection.
}
My naming conventions are based on the article "TDD Tips: Test Naming Conventions & Guidelines" by Bryan Cook. I found that article very helpful.

The first set of names is more readable to me, since the CamelCasing separates words and the underscores separate the parts of the naming scheme.
I also tend to include "Test" somewhere, either in the function name or the enclosing namespace or class.

As long as you follow a single practice, it doesn't really matter. Generally, I write a single unit test for a method that covers all the variations of that method (I have simple methods ;) and then write more complex sets of tests for methods that require it. My method names thus usually just start with test (a holdover from JUnit 3).

I use a 'T' prefix for test namespaces, classes and methods.
I try to be neat and create folders that replicate the namespaces, then create a tests folder or separate project for the tests and replicate the production structure for the basic tests:
AProj
    Objects
        AnObj
            AProp
    Misc
        Functions
            AFunc
    Tests
        TObjects
            TAnObj
                TAnObjsAreEqualUnderCondition
        TMisc
            TFunctions
                TFuncBehavesUnderCondition
I can easily see that something is a test, and I know exactly what original code it pertains to (if you can't work that out, then the test is too convoluted anyway).
It looks just like the interfaces naming convention, (I mean, you don't get confused with things starting with 'I', nor will you with 'T').
It's easy to just compile with or without the tests.
It's good in theory anyway, and works pretty well for small projects.
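A minimal sketch of what that might look like in code (all names are invented to mirror the layout above):
// Hypothetical: the 'T' prefix marks test namespaces, classes and methods.
namespace AProj.Tests.TObjects
{
    public class TAnObj
    {
        public void TAnObjsAreEqualUnderCondition()
        {
            // Arrange two AnObj instances under the condition, then assert they are equal.
        }
    }
}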

Related

test cases for unit testing

In my project we have a mass of methods that each test something. If you want to understand what goes on, you have to look through all of the methods; when a class has 20 test methods, it's hard to find the test case(s) in that mass of methods.
I have never seen interfaces used to define the test cases that you cover in your tests.
For example
public class A {
    public SomeResult doSomething(Param param) {
        .....
    }
    ..... some other methods
}
For this method there are 4 cases (for example):
check that the method works as expected with a null param
check that the method throws a runtime exception for some range of param values
check that the method returns the expected result (the normal case)
check something different
In our project, to cover those cases, people just create 4 methods (which can appear in any order; the first two cases might sit at the beginning of the test class and the last two might be written at the end, 200 lines of code below). Also, it is not always clear from a test's name what the test method checks.
Is it a good way to describe the test cases in an interface, like this:
public interface ATestSpecification {
    void doSomething_checkForNullParam();
    void doSomething_checkExceptionForNotAllowedParam();
    void doSomething_normalCase();
    void doSomething_checkSomethingDifferent();
}
And the test class:
public class ATest implements ATestSpecification {
    ...
    // implement the test cases described in the test specification
    ...
}
Since developer tests are essentially documentation and exist for the convenience of the developer(s) working on the code, I would recommend that you do away with the idea of creating interfaces for test methods; I have never seen that before and am sorry to have seen it just now. The existence of those interfaces will only get in your way when you search the code for references to a method name, or when you have your IDE display a call hierarchy for a method you want to see used correctly. Don't put things in your own way.
In the case of tests, because they are documentation, I tend to diverge from the usual pattern for naming methods in Java. That is, I will abandon using camelCase in favor of all_lowercase_separated_by_underscores, which seems easier to read, generally. Thus I will have "should_do_something" or "ensure_whatever" so that the test case name helps me find what I might be looking for. Also, I would be less focused on testing methods and more focused on testing behavior--I know that sounds like splitting hairs, but that's the way I think of it. Figure out what the class needs to do and write those tests then implement using TDD. I usually don't feel the need to back-fill any tests if I use TDD or a close approximation thereof. Jimmy is completely correct about keeping your code focused and following SRP.
Hope that helps!
EDIT: Naming conventions are always controversial; just pick one that works for you. It's come up here and here before.

Is it bad practice to unit test a method that is calling another method I am already testing?

Consider you have the following method:
public Foo ParseMe(string filepath)
{
    // break up filename
    // validate filename & extension
    // retrieve info from file if it's a certain type
    // some other general things you could do, etc
    var myInfo = GetFooInfo(filename);
    // create new object based on this data returned AND data in this method
}
Currently I have unit tests for GetFooInfo, but I think I also need to build unit tests for ParseMe. In a situation like this, where you have two methods that return two different properties - and a change in either of them could break something - should unit tests be created for both to determine that the output is as expected?
I like to err on the side of caution, being wary of things breaking and wanting maintenance down the road to be easier, but I feel skeptical about adding very similar tests to the test project. Would this be bad practice, or is there a way to do this more efficiently?
I'm marking this as language agnostic, but just in case it matters, I am using C# and NUnit. Also, I saw a post similar to this in title only, but the question is different. Sorry if this has already been asked.
ParseMe looks sufficiently non-trivial to require a unit test. To answer your precise question, if "you have a two methods that return two different properties - and a change in either of them could break something" you should absolutely unit test them.
Even if the bulk of the work is in GetFooInfo, at minimum you should test that it's actually called. I know nothing about NUnit, but I know in other frameworks (like RSpec) you can write tests like GetFooInfo.should be_called(:once).
It is not a bad practice to test a method that is calling another method. In fact, it is a good practice. If you have a method calling another method, it is probably performing additional functionality, which should be tested.
If you find yourself unit testing a method that calls a method that is also being unit tested, then you are probably experiencing code reuse, which is a good thing.
I agree with @tsm - absolutely test both methods (assuming both are public).
This may be a smell that the method or class is doing too much - violating the Single Responsibility Principle. Consider doing an Extract Class refactoring and decoupling the two classes (possibly with Dependency Injection). That way you could test both pieces of functionality independently. (That said, I'd only do that if the functionality was sufficiently complex to warrant it. It's a judgment call.)
Here's an example in C#:
using System.IO;

public interface IFooFileInfoProvider
{
    FooInfo GetFooInfo(string filename);
}

public class Parser
{
    private readonly IFooFileInfoProvider _fooFileInfoProvider;

    public Parser(IFooFileInfoProvider fooFileInfoProvider)
    {
        // Add a null check
        _fooFileInfoProvider = fooFileInfoProvider;
    }

    public Foo ParseMe(string filepath)
    {
        string filename = Path.GetFileName(filepath);
        var myInfo = _fooFileInfoProvider.GetFooInfo(filename);
        return new Foo(myInfo);
    }
}

public class FooFileInfoProvider : IFooFileInfoProvider
{
    public FooInfo GetFooInfo(string filename)
    {
        // Do I/O
        return new FooInfo(); // parameters...
    }
}
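To show the payoff, here is a rough sketch of how Parser could then be tested in isolation with a hand-rolled fake (NUnit assumed since the question mentions it; the fake class and the test itself are invented for the example):
using NUnit.Framework;

public class FakeFooFileInfoProvider : IFooFileInfoProvider
{
    public string LastFilename;

    public FooInfo GetFooInfo(string filename)
    {
        LastFilename = filename; // record the call so the test can verify it
        return new FooInfo();    // canned value, no I/O
    }
}

[TestFixture]
public class ParserTests
{
    [Test]
    public void ParseMe_UsesFilenameFromPath()
    {
        var fake = new FakeFooFileInfoProvider();
        var parser = new Parser(fake);

        parser.ParseMe(@"c:\data\report.foo");

        Assert.AreEqual("report.foo", fake.LastFilename);
    }
}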
Many developers, me included, take a programming-by-contract approach. That requires you to consider each method as a black box: whether the method delegates to another method to accomplish its task does not matter when you are testing that method. But you should also test all large or complicated parts of your program as units. So whether you need to unit test GetFooInfo depends on how complicated that method is.

Many Test classes or one Test class with many methods?

I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My approach to testing this was to create a PersonDaoTest with about 18 test methods, one testing each of the methods in PersonDao.
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in any way against TDD best practices? I was told that this creates confusion and is against best practices since it is non-standard. What was suggested is merging the two classes into PersonDaoTest instead.
As I understand it, the more your code is broken down into many classes, the better. Please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests that your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filter and for pagination. You may want to take a look at whether you can do a bit of design work to extract the pagination functionality into a separate class which could then be tested independently.
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture; if the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class... in practice I find I have n test classes per production class. The best practice is to use good names and keep related tests close (in a named folder).
My 2 cents: when you have a large class like that that has different "facets" to it, like pagination, I find it can often make for more understandable tests to not pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using some common test code with callbacks. For a trivial example, to illustrate testing that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() throws Exception {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}

public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 0);
}
This static method can be used by any class to test whether "some method" returns multiple hits. You could extend this to run lots of tests over the same callback, for example running it with and without a DB connection up, testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. The tests are very complex, and we figured out that the only way to implement them in a way that meets all our requirements was to have one test per class. So we consider each class an individual test, and then we were able to use methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String, Object> data) {}

    public void step_openSignUpPage() {}
    public void step_fillForm() {}
    public void step_submitForm() {}
    public void step_verifySignUpWasSuccessful() {}
}
All the steps are dependent: they run in the order specified, and if one fails the others will not be executed.
Of course, each step is a test by itself, but all together they form the sign-up test.
The requirements were something like:
Tests must be data driven, that is, execute the same test in parallel with different inputs.
Tests must run in different browsers in parallel as well, so each test will run "input_size x browsers_count" times in parallel.
Tests will focus on a web workflow, for example "sign up with valid data", and they will be split into smaller test units for each step of the workflow. That makes things easier to maintain and debug (when you have a failure, it will say SignUpTest.step_fillForm() and you'll know immediately what's wrong).
Test steps share the same test input and state (for example, the id of the user created). Imagine if you put the steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage() {}
    public void signUpTest_step_fillForm() {}
    public void signUpTest_step_submitForm() {}
    public void signUpTest_step_verifySignUpWasSuccessful() {}

    public void signUpNegativeTest_step_openSignUpPage() {}
    public void signUpNegativeTest_step_fillFormWithInvalidData() {}
    public void signUpNegativeTest_step_submitForm() {}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessful() {}
}
Then having state that belongs to two different tests in the same class would be a mess.
I hope I was clear and you may find this useful. In the end, choosing what represents a test, a class or a method, is just a decision that I think depends on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how a failure can be reported more precisely and debugged more easily, and what leads to more readable code.

Should I change the naming convention for my unit tests?

I currently use a simple convention for my unit tests. If I have a class named "EmployeeReader", I create a test class named "EmployeeReader.Tests". I then create all the tests for the class in the test class, with names such as:
Reading_Valid_Employee_Data_Correctly_Generates_Employee_Object
Reading_Missing_Employee_Data_Throws_Invalid_Employee_ID_Exception
and so on.
I have recently been reading about a different type of naming convention used in BDD. I like the readability of this naming, to end up with a list of tests something like:
When_Reading_Valid_Employee (fixture)
Employee_Object_Is_Generated (method)
Employee_Has_Correct_ID (method)
When_Reading_Missing_Employee (fixture)
An_Invalid_Employee_ID_Exception_Is_Thrown (method)
and so on.
Has anybody used both styles of naming? Can you provide any advice, benefits, drawbacks, gotchas, etc. to help me decide whether to switch or not for my next project?
The naming convention I've been using is:
functionName_shouldDoThis_whenThisIsTheSituation
For example, these would be some test names for a stack's 'pop' function
pop_shouldThrowEmptyStackException_whenTheStackIsEmpty
pop_shouldReturnTheObjectOnTheTopOfTheStack_whenThereIsAnObjectOnTheStack
Your second example (having a fixture for each logical "task", rather than one for each class) has the advantage that you can have different SetUp and TearDown logic for each task, thus simplifying your individual test methods and making them more readable.
You don't need to settle on one or the other as a standard. We use a mixture of both, depending on how many different "tasks" we have to test for each class.
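A small sketch of that split (NUnit-style; the EmployeeReader API and the exception type are assumptions based on the names in the question):
using NUnit.Framework;

[TestFixture]
public class When_Reading_Valid_Employee
{
    private Employee employee;

    [SetUp]
    public void Given()
    {
        // Setup specific to this context only; Read("VALID-ID") is a placeholder.
        employee = new EmployeeReader().Read("VALID-ID");
    }

    [Test]
    public void Employee_Object_Is_Generated()
    {
        Assert.IsNotNull(employee);
    }
}

[TestFixture]
public class When_Reading_Missing_Employee
{
    [Test]
    public void An_Invalid_Employee_ID_Exception_Is_Thrown()
    {
        // The exception type is assumed from the naming in the question.
        Assert.Throws<InvalidEmployeeIdException>(() => new EmployeeReader().Read("MISSING-ID"));
    }
}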
I feel the second is better because it makes your unit tests more readable to others; long names make for long lines that are more difficult to read or skim through. If you still feel there's any ambiguity about what the test does, you can add comments to clarify it.
Part of the reasoning behind the 2nd naming convention that you reference is that you are creating tests and behavioural specifications at the same time. You establish the context in which things are happening and what should actually then happen within that context. (In my experience, the observations/test-methods often start with "should_," so you get a standard "When_the_invoicing_system_is_told_to_email_the_client," "should_initiate_connection_to_mail_server" format.)
There are tools that will reflect over your test fixtures and output a nicely formatted html spec sheet, stripping out the underscores. You end up with human-readable documentation that is in sync with the actual code (as long as you keep your test coverage high and accurate).
Depending on the story/feature/subsystem on which you're working, these specifications can be shown to and understood by non-programmer stakeholders for verification and feedback, which is at the heart of agile and BDD in particular.
I use the second method, and it really helps with describing what your software should do. I also use nested classes to describe more detailed contexts.
In essence, the test classes are contexts, which can be nested, and the methods are all one-line assertions. For example:
public class MyClassSpecification
{
    protected MyClass instance = new MyClass();

    public class When_foobar_is_42 : MyClassSpecification
    {
        public When_foobar_is_42() {
            this.instance.SetFoobar( 42 );
        }

        public class GetAnswer : When_foobar_is_42
        {
            private Int32 result;

            public GetAnswer() {
                this.result = this.instance.GetAnswer();
            }

            public void should_return_42() {
                Assert.AreEqual( 42, result );
            }
        }
    }
}
which will give me the following output in my test runner:
MyClassSpecification+When_foobar_is_42+GetAnswer
should_return_42
I've been down the two roads you describe in your question, as well as a few others. Your first alternative is pretty straightforward and easy to understand for most people. I personally like the BDD style (your second example) more because it isolates different contexts and groups observations on those contexts. The only real downside is that it generates more code, so starting to do it feels slightly more cumbersome until you see the neat tests. Also, if you use inheritance to reuse fixture setup, you want a test runner that outputs the inheritance chain. Consider a class "An_empty_stack" that you want to reuse, so you create another class: "When_five_is_pushed_on : An_empty_stack". You want that full chain as output, not just "When_five_is_pushed_on". If your test runner does not support this, your tests will contain redundant information like "When_five_is_pushed_on_empty_stack : An_empty_stack" just to make the output nice.
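A rough sketch of that inheritance-based reuse (NUnit assumed; the stack names come from the example above, everything else is invented):
using System.Collections.Generic;
using NUnit.Framework;

// Shared context: just the setup, no tests of its own.
public class An_empty_stack
{
    protected Stack<int> stack;

    [SetUp]
    public void CreateEmptyStack()
    {
        stack = new Stack<int>();
    }
}

// A runner that shows the inheritance chain reports this as
// "When_five_is_pushed_on : An_empty_stack".
[TestFixture]
public class When_five_is_pushed_on : An_empty_stack
{
    [SetUp]
    public void PushFive()
    {
        stack.Push(5); // the base SetUp has already created the empty stack
    }

    [Test]
    public void should_contain_a_single_element()
    {
        Assert.AreEqual(1, stack.Count);
    }
}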
I vote for calling the test case class EmployeeReaderTestCase and naming the methods as described at http://xunitpatterns.com/Organization.html and http://xunitpatterns.com/Organization.html#Test%20Naming%20Conventions.

Are unit-test names important?

If unit-test names can become outdated over time and if you consider that the test itself is the most important thing, then is it important to choose wise test names?
i.e.
[Test]
public void ShouldValidateUserNameIsLessThan100Characters() {}
verse
[Test]
public void UserNameTestValidation1() {}
The name of any method should make it clear what it does.
IMO, your first suggestion is a bit long and the second one isn't informative enough. Also it's probably a bad idea to put "100" in the name, as that's very likely to change. What about:
public void validateUserNameLength()
If the test changes, the name should be updated accordingly.
Yes, the names are totally important, especially when you are running the tests in the console or on continuous integration servers. Jay Fields wrote a post about it.
Moreover, use good test names with one assertion per test and your suite will give you great reports when a test fails.
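For example, a minimal sketch of the one-assertion-per-test idea (the UserNameValidator class and its Validate method are invented for illustration):
using NUnit.Framework;

[TestFixture]
public class UserNameValidatorTests
{
    // One assertion per test: the name of the failing test tells you exactly what broke.
    [Test]
    public void Validate_ReturnsTrue_ForNameUnderLengthLimit()
    {
        Assert.IsTrue(new UserNameValidator().Validate("alice"));
    }

    [Test]
    public void Validate_ReturnsFalse_ForNameOverLengthLimit()
    {
        Assert.IsFalse(new UserNameValidator().Validate(new string('a', 101)));
    }
}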
Very. Equally important as choosing good method and variable names.
Much more so if your test suite is going to be referred to by new devs in the future.
As for your original question, definitely the first one. Typing in a few more characters is a small price to pay for:
readability, for you and others; it eliminates the 'what was I thinking here?' as well as the 'WTF is this guy getting at in this test?'
a quick zoom-in when you're in to fix something someone else wrote
an instant update for any test-suite visitor. If done correctly, just going over the names of the test cases will inform the reader of the specs for the unit.
Yes.
[Test]
public void UsernameValidator_LessThanLengthLimit_ShouldValidate() {}
Put the test subject first, the test statement next, and the expected result last.
That way, you get a clear indication of what it is doing, and you can easily sort by name :)
In Clean Code, page 124, Robert C. Martin writes:
The moral of the story is simple: Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code.
I think that if one cannot find a good, concise name for a test method, it's a sign that the design of that test is wrong. A good method name also helps you find out what happened in less time.
Yes, the whole point of the test name is that it tells you what doesn't work when the test fails.
I wouldn't put the conditions the test needs to meet in the name, because the conditions may change over time. In your example, I'd recommend naming it
UserNameLengthValidate()
or
UserNameLengthTest()
or something similar that explains what the test does without presuming the testing/validation parameters.
Yes, the names of the code under test (methods, properties, whatever) can change, but I contend your existing tests should fail if the expectations change. That is the true value of having well-constructed tests, not perusing a list of test names. That being said, well named test methods are great tools for getting new developers on board, helping them locate "executable documentation" with which they can kick the tires of existing code -- so I would keep the names of test methods up to date just as I would keep the assertions made by the test methods up to date.
I name my tests using the following pattern. Each test fixture attempts to focus on one class and is usually named {ClassUnderTest}Test. I name each test method {MemberUnderTest}_{Assertion}.
[TestFixture]
public class IndexableFileTest
{
    [Test]
    public void Connect_InitializesReadOnlyProperties()
    {
        // ...
    }

    [Test, ExpectedException(typeof(NotInitializedException))]
    public void IsIndexable_ErrorWhenNotConnected()
    {
        // ...
    }

    [Test]
    public void IsIndexable_True()
    {
        // ...
    }

    [Test]
    public void IsIndexable_False()
    {
        // ...
    }
}
Having a very descriptive name helps to instantly see what is not working correctly, so that you don't actually need to look at the unit test code.
Also, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation to the behavior of the unit under test.
Note, this only works when unit tests are very specific and do not validate too much within one unit test.
So for example:
[Test]
void TestThatExceptionIsRaisedWhenStringLengthLargerThan100()
[Test]
void TestThatStringLengthOf99IsAccepted()
The name needs to matter within reason. I don't want an email from the build saying that test 389fb2b5-28ad3 failed, but just knowing that it was a UserName test as opposed to something else would help ensure the right person gets to do the diagnosis.
[RowTest]
[Row("GoodName")]
[Row("GoodName2")]
public void Should_validate_username()
{
}
[RowTest]
[Row("BadUserName")]
[Row("Bad%!Name")]
public void Should_invalidate_username()
{
}
This might make more sense for more complex types of validation really.
Yes, they are. I'd personally recommend looking at SSW's rules to better unit tests. It contains some very helpful naming guidelines.