test cases for unit testing

In my project we have a mass of methods that each test something. If you want to understand what is going on, you have to look through all of them. When a class has 20 test methods, it is challenging to find the test case or cases you care about in that mass of methods.
I have never seen interfaces used to define the test cases that your tests cover.
For example
public class A {
    public SomeResult doSomething(Param param) {
        .....
    }
    ..... some other methods
}
For this method there are, say, four cases:
check that the method works as expected with a null param
check that the method throws a runtime exception for some range of param values
check that the method returns the expected result (the normal case)
check something different
In our project, to test those cases, people just create four test methods, which can appear in any order: the first two cases might sit at the top of the test class and the last two at the end, 200 lines of code below. Also, it is not always clear from a test's name what the method actually checks.
Is it a good idea to describe the test cases in an interface, like this:
public interface ATestSpecification {
    void doSomething_checkForNullParam();
    void doSomething_checkExceptionForNotAllowedParam();
    void doSomething_normalCase();
    void doSomething_checkSomethingDifferent();
}
And the test class:
public class ATest implements ATestSpecification {
    ...
    // implement the test cases described in the test specification
    ...
}

Since developer tests are essentially documentation and exist for the convenience of the developers working on the code, I would recommend that you drop the idea of creating interfaces for test methods. I have never seen that before and am sorry to have seen it just now. The existence of those interfaces can only get in your way when you search the code for references to a method name, or when you have your IDE display a call hierarchy on a method to find an example of how to use it correctly. Don't put things in your own way.
In the case of tests, because they are documentation, I tend to diverge from the usual pattern for naming methods in Java. That is, I abandon camelCase in favor of all_lowercase_separated_by_underscores, which is generally easier to read. Thus I will have "should_do_something" or "ensure_whatever", so that the test name helps me find what I might be looking for. Also, I would be less focused on testing methods and more focused on testing behavior. I know that sounds like splitting hairs, but that's the way I think of it. Figure out what the class needs to do, write those tests, and then implement using TDD. I usually don't feel the need to back-fill any tests if I use TDD or a close approximation thereof. Jimmy is completely correct about keeping your code focused and following SRP.
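As a rough illustration of that naming style in Java with JUnit (the BankAccount class and its behaviour are invented purely for the example):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

public class BankAccountTest {

    // Each test name reads as a statement about behaviour, not "testMethodX".
    @Test
    public void should_reject_withdrawal_when_balance_is_insufficient() {
        BankAccount account = new BankAccount(100);
        assertThrows(InsufficientFundsException.class, () -> account.withdraw(200));
    }

    @Test
    public void should_reduce_balance_when_withdrawal_succeeds() {
        BankAccount account = new BankAccount(100);
        account.withdraw(40);
        assertEquals(60, account.balance());
    }
}

// Minimal production code so the sketch stands on its own.
class InsufficientFundsException extends RuntimeException {}

class BankAccount {
    private int balance;
    BankAccount(int openingBalance) { this.balance = openingBalance; }
    void withdraw(int amount) {
        if (amount > balance) throw new InsufficientFundsException();
        balance -= amount;
    }
    int balance() { return balance; }
}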
Hope that helps!
EDIT: Naming conventions are always controversial; just pick one that works for you. It's come up here and here before.

Related

How should I unit test functions with many subfunctions?

I'm looking to better understand how I should test functions that have many substeps or subfunctions.
Let's say I have the function
// Modify the state of class somehow
public void DoSomething(){
    DoSomethingA();
    DoSomethingB();
    DoSomethingC();
}
Every function here is public. Each subfunction has 2 paths. So to test every path for DoSomething() I'd have 2*2*2 = 8 tests. By writing 8 tests for DoSomething() I will have indirectly tested the subfunctions too.
So should I be testing like this, or instead write unit tests for each of the subfunctions and then only write 1 test case that measures the final state of the class after DoSomething() and ignore all the possible paths? A total of 2+2+2+1 = 7 tests. But is it bad then that the DoSomething() test case will depend on the other unit test cases to have complete coverage?
There appears to be a very prevalent religious belief that testing should be unit testing. While I do not intend to underestimate the usefulness of unit testing, I would like to point out that it is just one possible flavor of testing, and its extensive (or even exclusive) use is indicative of people (or environments) that are somewhat insecure about what they are doing.
In my experience knowledge of the inner workings of a system is useful as a hint for testing, but not as an instrument for testing. Therefore, black box testing is far more useful in most cases, though that's admittedly in part because I do not happen to be insecure about what I am doing. (And that is in turn because I use assertions extensively, so essentially all of my code is constantly testing itself.)
Without knowing the specifics of your case, I would say that in general, the fact that DoSomething() works by invoking DoSomethingA() and then DoSomethingB() and then DoSomethingC() is an implementation detail that your black-box test should best be unaware of. So, I would definitely not test that DoSomething() invokes DoSomethingA(), DoSomethingB(), and DoSomethingC(), I would only test to make sure that it returns the right results, and using the knowledge that it does in fact invoke those three functions as a hint I would implement precisely those 7 tests that you were planning to use.
On the other hand, it should be noted that if DoSomethingA(), DoSomethingB(), and DoSomethingC() are also public functions, then you should test them individually as well.
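A small Java sketch of those seven tests' shape (everything here is hypothetical, since the original snippet exposes no state): the public subfunctions get direct tests, and doSomething() gets a single test on the final observable result rather than on which functions it calls.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class WidgetTest {

    // Hypothetical class under test, with some observable state to assert on.
    static class Widget {
        private int state = 0;
        void doSomethingA() { state += 1; }
        void doSomethingB() { state *= 2; }
        void doSomethingC() { state -= 3; }
        void doSomething() { doSomethingA(); doSomethingB(); doSomethingC(); }
        int state() { return state; }
    }

    // The public subfunctions are tested directly (one example of the 2+2+2 cases)...
    @Test
    public void doSomethingA_increments_state() {
        Widget w = new Widget();
        w.doSomethingA();
        assertEquals(1, w.state());
    }

    // ...and doSomething() gets one black-box test on the end result,
    // with no assertions about which subfunctions were invoked or in what order.
    @Test
    public void doSomething_leaves_the_expected_final_state() {
        Widget w = new Widget();
        w.doSomething();
        assertEquals(-1, w.state());
    }
}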
Definitely test every subfunction separately (because they're public).
It would help you find the problem if one pops up.
If DoSomething only uses other functions, I wouldn't bother writing additional tests for it. If it has some other logic, I would test it, but assume all functions inside work properly (if they're in a different class, mock them).
The point is finding what the function does that is not covered in other tests and testing that.
Indirect testing should be avoided. You should write unit tests for each function explicitly. After that, you should mock the submethods and test your main function. For example:
You have a method which inserts a user into the DB, and the method looks like this:
void InsertUser(User user){
    var exists = SomeExternal.UserExists(user);
    if(exists)
        throw new Exception("bla bla bla");
    //Insert code here
}
If you want to test the InsertUser function, you should mock the external/sub/nested methods and test the behaviour of InsertUser.
This example yields two tests: 1 - "When user exists, should throw exception"; 2 - "When user does not exist, should insert user".
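A rough Java sketch of those two tests with JUnit and Mockito (this re-casts the C# snippet: all names are invented, and the existence check is injected as a dependency so it can be mocked rather than called statically):

import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical collaborator and class under test, kept minimal for the sketch.
interface UserDirectory {
    boolean userExists(String name);
}

class UserService {
    private final UserDirectory directory;
    UserService(UserDirectory directory) { this.directory = directory; }
    void insertUser(String name) {
        if (directory.userExists(name)) throw new IllegalStateException("user already exists");
        // insert code here
    }
}

public class UserServiceTest {

    @Test
    public void when_user_exists_should_throw_exception() {
        UserDirectory directory = mock(UserDirectory.class);
        when(directory.userExists("bob")).thenReturn(true);

        assertThrows(IllegalStateException.class, () -> new UserService(directory).insertUser("bob"));
    }

    @Test
    public void when_user_does_not_exist_should_insert_user() {
        UserDirectory directory = mock(UserDirectory.class);
        when(directory.userExists("bob")).thenReturn(false);

        new UserService(directory).insertUser("bob");
        verify(directory).userExists("bob"); // the insert itself would be asserted via a mocked repository
    }
}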

Writing tests before writing code

As far as I understand TDD and BDD cycle is something like:
Start by writing tests
See them fail
Write code
Pass the tests
Repeat
The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?
You have the essence of it, but I would change one part of your description. You don't write tests before you write code - you write a test before you write code. Then - before writing any more tests - you write just enough code to get your test to pass. When it's passing, you look for opportunities to improve the code, and make the improvements while keeping your tests passing - and then you write your second test. The point is, you're focusing on one tiny bit of functionality at any given time. What is the next thing you want your program to do? Write a test for that, and nothing more. Get that test passing. Clean the code. What's the next thing you want it to do? Iterate until you're happy.
The thing is, if you write all the tests before writing any code, you don't have that focus. It's one test at a time.
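For instance, a first red/green cycle might look like this minimal Java sketch (the class and test are invented for the example): write only this one test, watch it fail (here it won't even compile), then write just enough code to make it pass before moving on.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class PriceCalculatorTest {

    // The very first test: it won't compile until PriceCalculator exists,
    // and that compile failure is the first "red".
    @Test
    public void total_is_zero_for_an_empty_basket() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(0, calculator.total());
    }
}

// Just enough production code to get to "green"; it only grows when a new test demands it.
class PriceCalculator {
    int total() { return 0; }
}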
Yes, that is correct. If you check out Michael Hartl's book on Ruby on Rails (free for HTML viewing), you will see how he does this specifically. So to add on to what lared said, let's say your first job is to add a new button to a web page. Your process would look like this:
Write a test to look for the button on the page visually.
Verify that the test fails (there should not be a button present, therefore it should fail).
Write code to place button on the page.
Verify test passes.
TDD will save your bacon when you accidentally do something to your code that breaks an old test. For example, you change the button to a link accidentally. The test will fail and alert you to the problem.
If you are using a real programming language (you know, with a compiler and all), then yes, of course you have to write class skeletons or interfaces, otherwise your tests will not even compile.
If you are using a scripting language, then you do not even have to write skeletons or interfaces, because your test script will happily begin to run and will fail on the first non-existent class or method that it encounters.
The question is how do you write tests before you have any code? Should I create some kind of class skeletons or interfaces? Or have I misunderstood something?
To expand on a point that lared made in his comment:
Then you write tests, which fail because the classes/whatever doesn't exist, and then you write the minimal amount of code which makes them pass
One thing to remember with TDD is that the test you are writing is the first client of your code. Therefore I wouldn't worry about not having the classes or interfaces defined already - because as he pointed out, simply by writing code referencing classes that don't exist, you will get your first "Red" in the cycle - namely, your code won't compile! That's a perfectly valid test.
TDD can also mean Test Driven Design
Once you embrace this idea, you will find that writing the test first serves less as a simple "is this code correct" and more of a "is this code right" guideline, so you'll find that you actually end up producing production code that is not only correct, but well structured as well.
Now a video showing this process would be super, but I don't have one, so I'll make a stab at an example. Note this is a super simple example and it ignores up-front pencil-and-paper planning / real-world requirements from the business, which will often be the driving force behind your design process.
Anyway suppose we want to create a simple Person object that can store a person's name and age. And we'd like to do this via TDD so we know it's correct.
So we think about it for a minute, and write our first test (note: example using pseudo C# / pseudo test framework)
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person();
    Assert.Empty(sut.Name);
    Assert.Zero(sut.Age);
}
Straight away we have a failing test; this won't compile because the Person class doesn't exist. So you use your IDE to auto-create the class for you:
public class Person
{
    public int Age {get;set;}
    public string Name {get;set;}
}
OK, now you have a first passing test. But now as you look at that class, you realise that there is nothing to ensure a person's age is always positive (>0). Let's assert that this is the case:
public void GivenANegativeAgeValue_PersonWillRejectIt()
{
    var sut = new Person();
    Assert.CausesException(() => sut.Age = -100);
}
Well, that test fails so let's fix up the class:
public class Person
{
    protected int age;
    public int Age
    {
        get { return age; }
        set {
            if (value <= 0)
            {
                throw new InvalidOperationException("Age must be a positive number");
            }
            age = value;
        }
    }
    public string Name {get;set;}
}
But now you might say to yourself: OK, since I know that a person's age can never be <=0, why do I even bother creating a writable property? Do I always want to have to write two statements, one to create a Person and another to set their Age? What if I forget to do it in one part of my code? What if I create a Person in one part of my code, and then later on, in another module, try to assign a negative value to Age? Surely Age must be an invariant of Person, so let's fix this up:
public class Person
{
    public Person(int age){
        if (age <= 0){
            throw new InvalidOperationException("Age must be a positive number");
        }
        this.Age = age;
    }
    public int Age {get; protected set;}
    public string Name {get;set;}
}
And of course you have to fix your tests because they won't compile any more - and in fact you now realise that the second test is redundant and can be dropped!
public void GivenANewPerson_TheirNameAndAgeShouldBeAsExpected()
{
    var sut = new Person(42);
    Assert.Empty(sut.Name);
    Assert.Equal(42, sut.Age);
}
And you will then probably go through a similar process with Name, and so on.
Now I know this seems like a terribly long-winded way of creating a class, but consider that you have basically designed this class from scratch with built-in defences against invalid state - for example you will never ever have to debug code like this:
//A Person instance, 6,000 lines and 3 modules away from where it was instantiated
john.Age = x; //Crash because x is -42
or
//A Person instance, reserialised from a message queue in another process
var someValue = 2015/john.Age; //DivideByZeroException because we forgot to assign john's age
For me, this is one of the key benefits of TDD: using it not only as a testing tool but as a design tool that makes you think about the production code you are implementing. It forces you to consider how the classes you create could end up in invalid, application-killing states and how to guard against that, and it helps you write objects that are easy to use and don't require their consumers to understand how they work, but rather what they do.
Since any modern IDE worth its salt will provide you with the opportunity to create missing classes / interfaces with a couple of keystrokes or mouse clicks, I believe it's well worth trying this approach.
TDD and BDD are different things that share a common mechanism. This shared mechanism that is that you write something that 'tests' something before you write the thing that does something. You then use the failures to guide/drive the development.
You write the tests by thinking about the problem you are trying to solve, and fleshing out the details by pretending that you have an ideal solution that you can test. You write your test to use your ideal solution. Doing this does all sorts of things like:
Discover names of things you need for your solution
Uncover interfaces for your things to make them easy to use
Experience failures with your things
...
A difference between BDD and TDD is that BDD is much more focused on the 'what' and the 'why', rather than the 'how'. BDD is very concerned with the appropriate use of language to describe things. BDD starts at a higher level of abstraction. When you get to areas where the detail overwhelms language, TDD is used as a tool to implement the detail.
This idea that you can choose to think of things and write about them at different levels of abstraction is key.
You write the 'tests' you need by choosing:
the appropriate language for your problem
the appropriate level of abstraction to explain your problem simply and clearly
an appropriate mechanism to call your functionality.

Many Test classes or one Test class with many methods?

I have a PersonDao that I'm writing unit tests against.
There are about 18-20 methods in PersonDao of the form -
getAllPersons()
getAllPersonsByCategory()
getAllPersonsUnder21() etc
My approach to testing this was to create a PersonDaoTest with about 18 test methods, one testing each of the methods in PersonDao.
Then I created a PersonDaoPaginationTest that tested these 18 methods by applying pagination parameters.
Is this in any way against TDD best practices? I was told that this creates confusion and is against best practice since it is non-standard. What was suggested instead was merging the two classes into PersonDaoTest.
As I understand it, the more your code is broken down into many classes, the better. Please comment.
The fact that you have a set of 18 tests that you are going to have to duplicate to test a new feature is a smell that suggests that your PersonDao class is taking on multiple responsibilities. Specifically, it appears to be responsible both for querying/filter and for pagination. You may want to take a look at whether you can do a bit of design work to extract the pagination functionality into a separate class which could then be tested independently.
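As a rough sketch of that extraction (all names here are invented): pagination becomes a small class with its own handful of tests, so it no longer has to be re-tested through all 18 DAO methods.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical pagination helper pulled out of the DAO.
class Paginator {
    private final int pageSize;

    Paginator(int pageSize) {
        if (pageSize <= 0) throw new IllegalArgumentException("pageSize must be positive");
        this.pageSize = pageSize;
    }

    <T> List<T> page(List<T> items, int pageNumber) {
        int from = pageNumber * pageSize;
        if (from >= items.size()) return List.of();
        return items.subList(from, Math.min(from + pageSize, items.size()));
    }
}

public class PaginatorTest {

    @Test
    public void returns_at_most_one_page_of_items() {
        assertEquals(List.of("a", "b"), new Paginator(2).page(List.of("a", "b", "c"), 0));
    }

    @Test
    public void returns_an_empty_page_past_the_end() {
        assertTrue(new Paginator(2).page(List.of("a", "b"), 5).isEmpty());
    }
}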
But in answer to your question, if you find that you have a class that you want to remain complex, then it's perfectly fine to use multiple test classes as a way of organizing a large number of tests. @Gishu's answer of grouping tests by their setup is a good approach. @Ryan's answer of grouping by "facets" or features is another good approach.
Can't give you a sweeping answer without looking at the code... except use whatever seems coherent to you and your team.
I've found that grouping tests based on their setup works out nicely in most cases, i.e. if 5 tests require the same setup, they usually fit nicely into a test fixture. If the 6th test requires a different setup (more or less), break it out into a separate test fixture.
This also leads to test fixtures that are feature-cohesive (i.e. tests grouped by feature); give it a try. I'm not aware of any best practice that says you need to have one test class per production class. In practice I find I have n test classes per production class; the best practice would be to use good names and keep related tests close (in a named folder).
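A small Java sketch of that kind of grouping, using JUnit 5 @Nested classes (the stack example is invented): each fixture owns its own setup, so the tests inside it stay short.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

public class StackTest {

    @Nested
    class WhenEmpty {
        Deque<String> stack;

        @BeforeEach
        void setUp() { stack = new ArrayDeque<>(); }

        @Test
        void pop_fails() {
            assertThrows(NoSuchElementException.class, stack::pop);
        }
    }

    @Nested
    class WithOneElementPushed {
        Deque<String> stack;

        @BeforeEach
        void setUp() {
            stack = new ArrayDeque<>();
            stack.push("x");
        }

        @Test
        void pop_returns_that_element() {
            assertEquals("x", stack.pop());
        }
    }
}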
My 2 cents: when you have a large class like that that has different "facets" to it, like pagination, I find it can often make for more understandable tests to not pack them all into one class. I can't claim to be a TDD guru, but I practice test-first development religiously, so to speak. I don't do it often, but it's not exactly rare, either, that I'll write more than a single test class for a particular class. Many people seem to forget good coding practices like separation of concerns when writing tests, though. I'm not sure why.
I think one test class per class is fine - if your implementation has many methods, then your test class will have many methods - big deal.
You may consider a couple of things however:
Your methods seem a bit "overly specific" and could use some abstraction or generalisation, for example instead of getAllPersonsUnder21() consider getAllPersonsUnder(int age)
If there are some more general aspects of your class, consider testing them using common test code with callbacks. As a trivial example, to illustrate testing that getAllPersons() returns multiple hits, do this:
@Test
public void testGetAllPersons() {
    assertMultipleHits(new Callable<List<?>>() {
        public List<?> call() throws Exception {
            return myClass.getAllPersons(); // Your callback is here
        }
    });
}

public static void assertMultipleHits(Callable<List<?>> methodWrapper) throws Exception {
    assertTrue("failure to get multiple items", methodWrapper.call().size() > 0);
}
This static method can be used by any class to test whether "some method" returns multiple hits. You could extend this to do lots of tests over the same callback, for example running it with and without a DB connection up, testing that it behaves correctly in each case.
I'm working on test automation of a web app using Selenium. It is not unit testing, but you might find that some principles apply. Our tests are very complex, and we figured out that the only way to implement them in a way that meets all our requirements was to have one test per class. So we consider each class to be an individual test, and then we were able to use methods as the different steps of the test. For example:
public class SignUpTest
{
    public SignUpTest(Map<String,Object> data){}
    public void step_openSignUpPage(){}
    public void step_fillForm(){}
    public void step_submitForm(){}
    public void step_verifySignUpWasSuccessfull(){}
}
All the steps are dependent: they run in the order specified, and if one fails the rest are not executed.
Of course, each step is a test by itself, but together they form the sign-up test.
The requirements were something like:
Tests must be data driven, that is, the same test executes in parallel with different inputs.
Tests must run in different browsers in parallel as well, so each test will run "input_size x browsers_count" times in parallel.
Tests will focus on a web workflow, for example "sign up with valid data", and they will be split into smaller test units for each step of the workflow. This makes things easier to maintain and debug (when you have a failure, it will say SignUpTest.step_fillForm() and you'll know immediately what's wrong).
Test steps share the same test input and state (for example, the id of the user created). Imagine if you put steps of different tests in the same class, for example:
public class SignUpTest
{
    public void signUpTest_step_openSignUpPage(){}
    public void signUpTest_step_fillForm(){}
    public void signUpTest_step_submitForm(){}
    public void signUpTest_step_verifySignUpWasSuccessfull(){}
    public void signUpNegativeTest_step_openSignUpPage(){}
    public void signUpNegativeTest_step_fillFormWithInvalidData(){}
    public void signUpNegativeTest_step_submitForm(){}
    public void signUpNegativeTest_step_verifySignUpWasNotSuccessfull(){}
}
Then having state belonging to the two tests in the same class would be a mess.
I hope I was clear and that you find this useful. In the end, choosing what represents a test, a class or a method, is just a decision that depends on: what the target of a test is (in my case, a workflow around a feature), what's easier to implement and maintain, how you make a failure more precise and easier to debug when a test fails, what leads you to more readable code, and so on.

Should I change the naming convention for my unit tests?

I currently use a simple convention for my unit tests. If I have a class named "EmployeeReader", I create a test class named "EmployeeReader.Tests". I then create all the tests for the class in the test class with names such as:
Reading_Valid_Employee_Data_Correctly_Generates_Employee_Object
Reading_Missing_Employee_Data_Throws_Invalid_Employee_ID_Exception
and so on.
I have recently been reading about a different type of naming convention used in BDD. I like the readability of this naming, to end up with a list of tests something like:
When_Reading_Valid_Employee (fixture)
Employee_Object_Is_Generated (method)
Employee_Has_Correct_ID (method)
When_Reading_Missing_Employee (fixture)
An_Invalid_Employee_ID_Exception_Is_Thrown (method)
and so on.
Has anybody used both styles of naming? Can you provide any advice, benefits, drawbacks, gotchas, etc. to help me decide whether to switch or not for my next project?
The naming convention I've been using is:
functionName_shouldDoThis_whenThisIsTheSituation
For example, these would be some test names for a stack's 'pop' function
pop_shouldThrowEmptyStackException_whenTheStackIsEmpty
pop_shouldReturnTheObjectOnTheTopOfTheStack_whenThereIsAnObjectOnTheStack
Your second example (having a fixture for each logical "task", rather than one for each class) has the advantage that you can have different SetUp and TearDown logic for each task, thus simplifying your individual test methods and making them more readable.
You don't need to settle on one or the other as a standard. We use a mixture of both, depending on how many different "tasks" we have to test for each class.
I feel the second is better because it makes your unit tests more readable to others; long lines make the code look more difficult to read and more difficult to skim through. If you still feel there's any ambiguity about what the test does, you can add comments to clarify it.
Part of the reasoning behind the 2nd naming convention that you reference is that you are creating tests and behavioural specifications at the same time. You establish the context in which things are happening and what should actually then happen within that context. (In my experience, the observations/test-methods often start with "should_," so you get a standard "When_the_invoicing_system_is_told_to_email_the_client," "should_initiate_connection_to_mail_server" format.)
There are tools that will reflect over your test fixtures and output a nicely formatted html spec sheet, stripping out the underscores. You end up with human-readable documentation that is in sync with the actual code (as long as you keep your test coverage high and accurate).
Depending on the story/feature/subsystem on which you're working, these specifications can be shown to and understood by non-programmer stakeholders for verification and feedback, which is at the heart of agile and BDD in particular.
I use the second method, and it really helps with describing what your software should do. I also use nested classes to describe more detailed context.
In essence, test classes are contexts, which can be nested, and methods are all one line assertions. For example,
public class MyClassSpecification
{
    protected MyClass instance = new MyClass();

    public class When_foobar_is_42 : MyClassSpecification
    {
        public When_foobar_is_42() {
            this.instance.SetFoobar( 42 );
        }

        public class GetAnswer : When_foobar_is_42
        {
            private Int32 result;

            public GetAnswer() {
                this.result = this.instance.GetAnswer();
            }

            public void should_return_42() {
                Assert.AreEqual( 42, result );
            }
        }
    }
}
which will give me following output in my test runner:
MyClassSpecification+When_foobar_is_42+GetAnswer
should_return_42
I've been down the two roads you describe in your question, as well as a few others... Your first alternative is pretty straightforward and easy for most people to understand. I personally like the BDD style (your second example) more because it isolates different contexts and groups observations on those contexts. The only real downside is that it generates more code, so starting to do it feels slightly more cumbersome until you see the neat tests. Also, if you use inheritance to reuse fixture setup, you want a test runner that outputs the inheritance chain. Consider a class "An_empty_stack" that you want to reuse, so you derive another class: "When_five_is_pushed_on : An_empty_stack". You want that full chain as output, not just "When_five_is_pushed_on". If your test runner does not support this, your tests will contain redundant information like "When_five_is_pushed_on_empty_stack : An_empty_stack" just to make the output nice.
I vote for calling the test case class EmployeeReaderTestCase and naming the methods as described at http://xunitpatterns.com/Organization.html and http://xunitpatterns.com/Organization.html#Test%20Naming%20Conventions

How to test function call order

Consider code like this:
class ToBeTested {
public:
    void doForEach() {
        for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
            doOnce(*it);
            doTwice(*it);
            doTwice(*it);
        }
    }

    void doOnce(Contained & c) {
        // do something
    }

    void doTwice(Contained & c) {
        // do something
    }

    // other methods

private:
    vector<Contained> m_contained;
};
I want to test that if I fill the vector with 3 values, my functions will be called in the proper order and the proper number of times. For example, my test could look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Are there any means to do this with the CppUnit or GoogleTest frameworks? Maybe some other unit test framework allows performing such tests?
I understand that this is probably impossible without calling some debug functions from these functions, but can it at least be done automatically in some test framework? I don't want to scan trace logs and check their correctness by hand.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to catch performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expect).
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
The problem with checking that methods are called in a certain order is that your code is going to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
Move them to another class, call it Collaborator
Add an instance of this other class to the ToBeTested class
Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collaborator class
Call the method under test
Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.
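To make those five steps concrete, here is a rough sketch of the same pattern, not in C++ but in Java with Mockito, whose InOrder verification does what the last step describes (every name here is invented):

import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;
import org.mockito.InOrder;

public class CallOrderTest {

    // Hypothetical collaborator extracted from the class under test.
    interface Collaborator {
        void doOnce(String item);
        void doTwice(String item);
    }

    // Hypothetical class under test that delegates the per-item work.
    static class ToProcess {
        private final Collaborator collaborator;
        ToProcess(Collaborator collaborator) { this.collaborator = collaborator; }
        void doForEach(String... items) {
            for (String item : items) {
                collaborator.doOnce(item);
                collaborator.doTwice(item);
                collaborator.doTwice(item);
            }
        }
    }

    @Test
    public void calls_the_collaborator_in_the_expected_order() {
        Collaborator collaborator = mock(Collaborator.class);
        new ToProcess(collaborator).doForEach("one");

        InOrder inOrder = inOrder(collaborator);
        inOrder.verify(collaborator).doOnce("one");
        inOrder.verify(collaborator, times(2)).doTwice("one");
        inOrder.verifyNoMoreInteractions();
    }
}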
You could check out mockpp.
Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.
Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some fairly advanced stuff, but if you are not lazy and really want to understand this kind of thing, it will be really helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.