I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
Thank you for your help.
Best regards,
Pedro Magueija
Edited: I'm using .NET (Visual Studio 2010 with VB).
If you have to write many tests that vary only in their input and expected output, use a data-driven test. This lets you define the test once, along with a mapping between inputs and outputs, and the unit testing framework will then report one test per case. How to do this depends on which framework you are using.
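As a sketch of what that could look like with NUnit in C# (the OP mentions VB.NET, but the attribute works the same way; RomanParser.Parse and the sample values are invented for illustration):
using NUnit.Framework;

[TestFixture]
public class RomanParserTests
{
    // The mapping between inputs and expected outputs lives in one place.
    private static readonly object[] Cases =
    {
        new object[] { "I",  1 },
        new object[] { "IV", 4 },
        new object[] { "X",  10 },
    };

    // The runner reports one test result per entry in Cases.
    [TestCaseSource("Cases")]
    public void Parse_ReturnsExpectedValue(string input, int expected)
    {
        Assert.AreEqual(expected, RomanParser.Parse(input));
    }
}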
It's better to have separate unit tests for each input/output set, covering the full spectrum of possible values for the method you are trying to test (or at least those input/output sets that you want to unit test).
Smaller tests are easier to read.
The name is part of the documentation of the test.
Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
    // setup1
    assert()
    // setup2
    assert()
    // setup3
    assert()
}
In my experience this gets very big very quickly and becomes hard to read and understand, so I would do:
void testDivideByZero() {
    // setup
    assert()
}

void testUnderflow() {
    // setup
    assert()
}

void testOverflow() {
    // setup
    assert()
}
Should I write one test for each entry value, or should I change the entryValues variable and call the .assert() method for the whole range of possible values?
If you have one code path typically you do not test all possible inputs. What you usually want to test are "interesting" inputs that make good exemplars of the data you will get.
For example, if I have a function:
define add_one(num) {
    return num + 1;
}
I can't write a test for all possible values, so I may use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set, because they are good representatives of the interesting values I might get.
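In C#, a sketch of that idea might look like the following, assuming it lives in an NUnit [TestFixture] with using NUnit.Framework; MathUtil.AddOne is a hypothetical stand-in for the add_one pseudocode above:
[Test]
public void AddOne_HandlesRepresentativeInputs()
{
    // "Interesting" representative inputs rather than every possible int.
    int[] interesting = { int.MinValue, -1, 0, 1 };
    foreach (int n in interesting)
    {
        Assert.AreEqual(n + 1, MathUtil.AddOne(n), "input: " + n);
    }

    // int.MaxValue is the boundary where the behaviour (wrap, saturate,
    // or throw) has to be decided explicitly and asserted in its own test.
}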
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser:
define execute(directive) {
    if (directive == 'quit') { exit; }
    elsif (directive == 'help') { print help; }
    elsif (directive == 'connect') { initialize_connection(); }
    else { warn("unknown directive"); }
}
For clarity I used an elsif chain rather than a dispatch table. I think this makes it clear that each unique incoming value has a different behavior, and therefore you would need to test every possible value.
Are you talking about this difference?
- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
}

- (void) testSomething2
{
    [foo callBarWithValue:y];
    assert…
}

vs.

- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
    [foo callBarWithValue:y];
    assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately. And of course, I only choose the latter when the test values really belong together and form a coherent unit.
You really have two options; you don't mention which test framework or language you are using, so one may not be applicable.
1) If your test framework supports it, use a RowTest. MbUnit and NUnit support this if you're using .NET; it allows you to put multiple attributes on your method, and each line is executed as a separate test.
2) If not, write a test per condition and give it a meaningful name, so that if (when) the test fails you can find the problem easily and the name means something to you.
EDIT
It's called TestCase in NUnit; see the NUnit documentation on the TestCase attribute for an explanation.
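As a rough sketch of the attribute in use (Calculator.Divide and the values are invented; each attribute line runs as its own test case):
[TestCase(10, 2, 5)]
[TestCase(9, 3, 3)]
[TestCase(7, 1, 7)]
public void Divide_ReturnsQuotient(int dividend, int divisor, int expected)
{
    Assert.AreEqual(expected, Calculator.Divide(dividend, divisor));
}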
What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement.
For example, I am writing a point class that has a function CollidesWithLine(LineSegment):
public class Point {
    // Point data and constructor ...

    public bool CollidesWithLine(LineSegment lineSegment) {
        Vector PointEndOfMovement = new Vector(Position.X + Velocity.X,
                                               Position.Y + Velocity.Y);
        LineSegment pointPath = new LineSegment(Position, PointEndOfMovement);
        if (lineSegment.Intersects(pointPath)) return true;
        return false;
    }
}
I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But, should I just stop what I am doing on my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle.
Should I just write the code that detects whether line segments intersect inside the CollidesWithLine function and refactor it once it is working? That would work in this case, since I can access the data from LineSegment, but what about cases where that kind of data is private?
If you follow TDD to the letter as per how Kent Beck defines it in his book, when you come across something that you will also need to test, make a note of it on a piece of paper (he refers to this as a test list) and then focus on the current test. Kent suggests you should work on one test at a time.
From a test first perspective you should focus on making the test pass, which has several options:
Write the implementation of Intersects inline in the current method. "Green" means working, not pretty. Once working, refactor both the code AND tests.
Stub it out. Pass a test double (mock) into the method to simulate the contract.
Fake it. When you come across a method you need, make a note of it for other tests, then write a basic implementation (e.g. return true).
I suggest your best option is to mock it, that way you stay in your workflow and you also test a limited amount of code at a time.
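A minimal sketch of that mocking option, assuming CollidesWithLine is reworked to accept a hypothetical ILineSegment abstraction (the interface, the stub, and the test below are all invented; LineSegment and Point are the OP's types):
public interface ILineSegment
{
    bool Intersects(LineSegment other);
}

// A hand-rolled test double: it simulates the contract without doing any geometry.
public class StubLineSegment : ILineSegment
{
    private readonly bool result;
    public StubLineSegment(bool result) { this.result = result; }
    public bool Intersects(LineSegment other) { return result; }
}

[Test]
public void CollidesWithLine_ReturnsTrue_WhenSegmentsIntersect()
{
    var point = new Point(/* position, velocity */);
    Assert.IsTrue(point.CollidesWithLine(new StubLineSegment(true)));
}
This keeps the Point test on its own red/green cycle while the real Intersects goes on the test list.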
I like to use the [Ignore] attribute to mark tests which require attention (e.g. when a test is not yet complete). Such tests will not run. Ignored tests are highlighted in test runners (usually yellow or orange), and even if all other tests pass, you will not see the green bar while there are any ignored tests. This ensures that tests will not be forgotten.
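For example (a sketch; the test name and reason string are made up):
[Test]
[Ignore("Pending: LineSegment.Intersects is not implemented yet")]
public void CollidesWithLine_ReturnsFalse_WhenPathsDoNotCross()
{
    // Body to be written later; the runner highlights this test as ignored
    // instead of letting it be forgotten.
}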
The code below is my method logic.
public String getData() {
    if (leg.length == 4)
        return "ATE";
    else if (leg.length == 2) {
        return "FTE";
    }
    if (leg.length == 3) {
        if (xr1.KotilaType.equals("CST")) {
            return "HTE";
        }
        return "DTE";
    }
    return "BTE";
}
As tasked, I need to start writing JUnit test cases for this method.
As you can see, there is only one method, with a number of conditions inside it.
Now my question is:
Do we need to write a separate JUnit test case (a separate method) for each condition?
Or will it be sufficient to write only one JUnit test case covering all the above conditions?
I am new to JUnit and not an expert; please guide me on the appropriate way to write JUnit test cases for the above method.
This is a question of style. Here are the two alternatives. With everything in one method:
public void testAll() {
    assertEquals("ATE", getData(leg4));
    assertEquals("FTE", getData(leg2));
    // etc.
}
With one test method per combination:
public void testLength4() {
    assertEquals("ATE", getData(leg4));
}

public void testLength2() {
    assertEquals("FTE", getData(leg2));
}
The former has the advantage that it is very easy to see all combinations, because they're all in one place. This is very useful if there are a lot of combinations or the combinations are complex.
The latter has the advantage that you have one test per combination, so it's very easy to localize a problem, because (theoretically) only one test will fail. Disadvantage: you very quickly run out of names for test methods. testLength4, testLength2, testLength3WithCST, testLength3WithoutCST. This disadvantage gets particularly bad when you have a lot of combinations (say like one test for each month in a year).
Most of the time I use one test method per test case, but if the number of combinations starts to get high, I use a single method containing all of them, because I like to be able to see all of my combinations at once and check whether I've missed any.
I'd start with five unit tests, one for each possible return value, setting the state of the object under test (leg.length and xr1.KotilaType) to the appropriate values in each test.
Then I'd move on to tests that cover boundary values, e.g. leg.length == 5, leg.length == 1, etc.
Let's say you're writing a function to check if a page was reached by the appropriate URL. The page has a "canonical" stub - for example, while a page could be reached at stackoverflow.com/questions/123, we would prefer (for SEO reasons) to redirect it to stackoverflow.com/questions/123/how-do-i-move-the-turtle-in-logo - and the actual redirect is safely contained in its own method (e.g. redirectPage($url)). But how do you properly test the function which calls it?
For example, take the following function:
function checkStub($questionId, $baseUrl, $stub) {
    // $model is assumed to be available in scope (e.g. injected or global)
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
If you were to unit test the checkStub() function, wouldn't the redirect get in the way?
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing for the world of integration testing. My mind immediately thinks of routers and controllers as having these sorts of problems, as testing them necessarily leads to the generation of pages rather than being confined to just their own function.
Do I just fail at unit testing?
You say...
This is part of a larger problem where certain functions seem to get too big and leave the realm of unit testing for the world of integration testing
I think this is why unit testing is (1) hard and (2) leads to code that doesn't crumble under its own weight. You have to be meticulous about breaking all of your dependencies or you end up with unit tests == integration tests.
In your example, you would inject a redirector as a dependency. You use a mock, double or spy. Then you do the tests as #atk lays out. Sometimes it's not worth it. More often it forces you to write better code. And it's hard to do without an IOC container.
This is an old question, but I think this answer is relevant. #Rob states that you would inject a redirector as a dependency - and sure, this works. However, your problem is that you don't have a good separation of concerns.
You need to make your functions as atomic as possible, and then compose larger functionality using the granular functions you've created. You wrote this:
function checkStub($questionId, $baseUrl, $stub) {
    // $model is assumed to be available in scope (e.g. injected or global)
    $canonicalStub = $model->getStub($questionId);
    if ($stub != $canonicalStub) {
        redirectPage($baseUrl . $canonicalStub);
    }
}
I'd write this:
function checkStubEquality($stub1, $stub2) {
return $stub1 == $stub2;
}
$canonicalStub = $model->getStub($questionId);
if (!checkStubEquality($canonicalStub, $stub)) redirectPage($baseUrl . $canonicalStub);
It sounds like you just have another test case. You need to check that the stub is identified correctly as a stub with both positive and negative testing, and you need to check that the page to which you are redirected is correct.
Or do I totally misunderstand the question?
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, so that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
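A sketch of that setup step (the Database and SomeData types, and their Add/Get signatures, are assumptions based on the snippets above):
[TestFixture]
public class GetTests
{
    private Database db;        // assumed type
    private SomeData someData;  // assumed type

    [SetUp]
    public void SeedDatabase()
    {
        db = new Database();
        someData = new SomeData();
        db.Add(someData);       // arrange the data that Get() needs
    }

    [Test]
    public void GetTest()
    {
        Assert.IsNotNull(db.Get(someData));
    }
}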
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
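A sketch of that separation (IDataStore, InMemoryDataStore and SomeData are invented names standing in for whatever abstraction fits the real code):
using System.Collections.Generic;

// Production code depends on this abstraction instead of a live database.
public interface IDataStore
{
    void Add(SomeData data);
    SomeData Get(SomeData key);
}

// A trivial in-memory fake used only by the tests; no connection can drop.
public class InMemoryDataStore : IDataStore
{
    private readonly List<SomeData> items = new List<SomeData>();

    public void Add(SomeData data) { items.Add(data); }

    public SomeData Get(SomeData key) { return items.Find(d => d.Equals(key)); }
}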
I don't think that's possible out of the box.
Anyway, the test class design you describe will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
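A sketch of that guard (the Database type and its CanConnect check are assumptions):
[TestFixture]
public class GetTests
{
    private Database db;  // assumed type

    [TestFixtureSetUp]    // one-time setup for the whole fixture
    public void CheckPrerequisites()
    {
        db = new Database();
        if (!db.CanConnect())  // hypothetical health check
        {
            Assert.Ignore("Database unavailable; skipping the Get tests.");
        }
    }

    // ... Get tests go here ...
}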
Create a shared flag that the Add test sets when it fails, and return early from the Get test if the flag is set:
public static boolean addFailed = false; // static, so the flag is shared even if the framework creates a new instance per test

public void testAdd() throws Throwable {
    try {
        // ... old test code ...
    } catch (Throwable t) { // catch all errors
        addFailed = true;
        throw t;            // don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    // ... old test code ...
}
I have a little JUnit test that exports an object to the file system. At first my test looked like this:
public void exportTest() {
    // ...creating a list with some objects to export...
    JAXBService service = new JAXBService();
    service.exportList(list, "output.xml");
}
Usually my tests contain an assertion like assertEquals(...), so I changed the code to the following:
public void exportCustomerListTest() throws Exception {
    // delete the old resulting file, so we can test for a new one at the end
    File file = new File("output.xml");
    file.delete();

    // ...creating a list with some objects to export...

    JAXBService service = new JAXBService();
    service.exportCustomers(list, "output.xml");

    // Test if a file has been created and if it contains some bytes
    FileReader fis = new FileReader("output.xml");
    int firstByte = fis.read();
    assertTrue(firstByte != -1);
}
Do I need this, or was the first approach enough? I am asking because the first one is really just "testing" that the code runs, not testing any results. Or can I rely on the "contract" that if the export method runs without an exception, the test passes?
Thanks
Well, you're testing that your code runs to completion without any exceptions - but you're not testing anything about the output.
Why not keep a file with the expected output, and compare that with the actual output? Note that this would probably be easier if you had an overload of exportCustomers which took a Writer - then you could pass in a StringWriter and only write to memory. You could test that in several ways, and have just a single test of the overload which takes a filename, as that overload would just create a FileOutputStream wrapped in an OutputStreamWriter and then call the more thoroughly tested method. For that test you'd really just need to check that the right file exists, probably.
You could use:
assertTrue(new File("output.xml").exists());
If you notice problems during the generation of the file, you can unit test the generation process (and not the fact that the file was correctly written to and reloaded from the filesystem).
You can either go with the "gold file" approach (testing that the two files are identical, byte for byte) or test various outputs of your generator (I imagine that generating the content is separate from saving it to the file).
I agree with the other posts. I will also add that your first test won't tell a test suite or test runner that this particular test has failed.
Sometimes a test only needs to demonstrate that no exceptions were thrown. In that case, relying on an exception to fail the test is good enough. There is certainly nothing gained in JUnit by calling the assertEquals method: a test passes when it doesn't throw an AssertionError, not because that method is called. Consider a method that allows null input; you might write a test like this:
@Test
public void testNullAllowed() {
    new CustomObject().methodThatAllowsNull(null);
}
That might be enough of a test right there (what the method does with a null value can be left to a separate test, or perhaps there is nothing interesting to test about it), although it is prudent to leave a comment that you didn't forget the assert; you left it out on purpose.
In your case, however, you haven't tested very much. Sure, it didn't blow up, but an empty method wouldn't blow up either. Your second test is better; at least you demonstrate that a non-empty file was created. But you can do better than that and check that at least some reasonable result was created.