I recognize that to 95% of you, this is a very WTF question.
So. What's a unit test? I understand that essentially you're attempting to isolate atomic functionality but how do you test for that? When is it necessary? When is it ridiculous?
Can you give an example? (Preferably in C? I mostly hear about it from Java devs on this site so maybe this is specific to Object Oriented languages? I really don't know.)
I know many programmers swear by unit testing religiously. What's it all about?
EDIT: Also, what's the ratio of time you typically spend writing unit tests to time spent writing new code?
I'm in Java now, before that C++, before that C. I am entirely convinced that every piece of work I have done that I am not now ashamed of was enhanced by the testing strategies I picked. Skimping on testing hurts.
I'm sure that you test the code you write. What techniques do you use? For example, you might sit in a debugger and step through the code and watch what happens. You might execute the code against some test data someone gave you. You might devise particular inputs because you know that your code has some interesting behaviours for certain input values. Suppose your code uses someone else's code and theirs isn't ready yet; you mock up their code so that yours can run against at least some fake answers.
In all cases you may be, to some degree, Unit Testing. The last one is particularly interesting - you are very much testing in isolation, testing your UNIT, even if theirs is not yet ready.
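As a minimal sketch of that last technique in C# (all names here - IExchangeRateService, FakeExchangeRateService, CurrencyConverter - are invented for illustration, not taken from any real code in this thread):
// Hypothetical interface for the dependency that isn't ready yet.
public interface IExchangeRateService
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// A hand-rolled fake that returns a canned answer, so our unit can be tested in isolation.
public class FakeExchangeRateService : IExchangeRateService
{
    public decimal GetRate(string fromCurrency, string toCurrency)
    {
        return 1.25m; // fixed fake rate
    }
}

[TestMethod]
public void Convert_UsesWhateverRateTheServiceReports()
{
    var converter = new CurrencyConverter(new FakeExchangeRateService());
    Assert.AreEqual(125m, converter.Convert(100m, "USD", "EUR"));
}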
My opinion:
1) Tests that can easily be rerun are very useful - they catch no end of late-creeping defects.
In contrast, testing by sitting in a debugger is mind-numbing.
2) The activity of constructing interesting tests as you write your code, or BEFORE you write your code, makes you focus on the fringe cases: those annoying zero and null inputs, those "off by one" errors (a couple of example tests follow this list). I perceive better code coming out as a result of good unit tests.
3) There is a cost to maintaining the tests. Generally it's worth it, but don't underestimate the effort of keeping them working.
4) There can be a tendency to over-emphasise unit tests. The really interesting bugs tend to creep in when pieces are integrated. You replace that library you mocked with the real thing and lo! It doesn't quite do what it said on the tin. Also, there is still a role for manual or exploratory testing. The insightful human tester finds special defects.
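For point 2, the kind of fringe-case tests meant might look like this in C# (ParseQuantity is a hypothetical helper, purely for illustration):
[TestMethod]
public void ParseQuantity_EmptyString_ReturnsZero()
{
    Assert.AreEqual(0, Parser.ParseQuantity(""));
}

[TestMethod]
public void ParseQuantity_Null_ReturnsZero()
{
    Assert.AreEqual(0, Parser.ParseQuantity(null));
}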
The simplest/most non-technical definition I can come up with: an automated way to test parts of your code...
I use it and love it... but not religiously. One of my proudest moments in unit testing was an interest calculation that I did for a bank; it was extremely complicated, and I only had one bug - and there was no unit test for that case... As soon as I added the case and fixed my code, it was perfect.
So, taking that example: I had a class called InterestCalculation with properties for all of the arguments and a single public method, Calculate(). There were several steps to the calculation, and if I had tried to write the whole thing in a single method and just check my result, it would have been overwhelming to find where my bug(s) were... So I took each step of the calculation, created a private method for it, and wrote unit tests for all of the different cases. (Some people will tell you to only test public methods, but in this scenario it worked better for me...) One example of the private methods was:
Method:
/// <summary>
/// Calculates the number of days between the last coupon date and the effective date,
/// using the configured day-count convention.
/// </summary>
/// <param name="effectiveDate">The date up to which interest has accrued.</param>
/// <param name="lastCouponDate">The date of the last coupon payment.</param>
/// <returns>The number of days since the last coupon date.</returns>
private Int32 CalculateNumberDaysSinceLastCouponDate(DateTime effectiveDate, DateTime lastCouponDate)
{
Int32 result = 0;
if (lastCouponDate.Month == effectiveDate.Month)
{
result = this._Parameters.DayCount.GetDayOfMonth(effectiveDate) - lastCouponDate.Day;
}
else
{
result = this._Parameters.DayCount.GetNumberOfDaysInMonth(lastCouponDate)
- lastCouponDate.Day + effectiveDate.Day;
}
return result;
}
Test Methods:
Note: I would name them differently now; instead of numbers I would basically put the summary into the method name (see the sketch after the two tests below).
/// <summary>
///A test for CalculateNumberDaysSinceLastCouponDate
///</summary>
[TestMethod()]
[DeploymentItem("WATrust.CAPS.DataAccess.dll")]
public void CalculateNumberDaysSinceLastCouponDateTest1()
{
AccruedInterestCalculationMonthly_Accessor target = new AccruedInterestCalculationMonthly_Accessor();
target._Parameters = new AccruedInterestCalculationMonthlyParameters();
target._Parameters.DayCount = new DayCount(13);
DateTime effectiveDate = DateTime.Parse("04/22/2008");
DateTime lastCouponDate = DateTime.Parse("04/15/2008");
int expected = 7;
int actual;
actual = target.CalculateNumberDaysSinceLastCouponDate(effectiveDate, lastCouponDate);
Assert.AreEqual(expected, actual);
WriteToConsole(expected, actual);
}
/// <summary>
///A test for CalculateNumberDaysSinceLastCouponDate
///</summary>
[TestMethod()]
[DeploymentItem("WATrust.CAPS.DataAccess.dll")]
public void CalculateNumberDaysSinceLastCouponDateTest2()
{
AccruedInterestCalculationMonthly_Accessor target = new AccruedInterestCalculationMonthly_Accessor();
target._Parameters = new AccruedInterestCalculationMonthlyParameters();
target._Parameters.DayCount = new DayCount((Int32)
DayCount.DayCountTypes.ThirtyOverThreeSixty);
DateTime effectiveDate = DateTime.Parse("04/10/2008");
DateTime lastCouponDate = DateTime.Parse("03/15/2008");
int expected = 25;
int actual;
actual = target.CalculateNumberDaysSinceLastCouponDate(effectiveDate, lastCouponDate);
Assert.AreEqual(expected, actual);
WriteToConsole(expected, actual);
}
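Following the note above, the first test might be renamed along these lines (just a sketch of the naming style, not code from the original project):
[TestMethod()]
public void CalculateNumberDaysSinceLastCouponDate_SameMonth_ReturnsDifferenceOfDays()
{
    // Same body as CalculateNumberDaysSinceLastCouponDateTest1 above;
    // only the name changes, so a failure in the test runner reads like a sentence.
}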
Where is it Ridiculous?
Well, to each his own... The more you do it, the more you will find where it is useful and where it seems to "be ridiculous". Personally, I don't use it to test my database in the way most hardcore unit testers would, in the sense of having scripts to rebuild the database schema, repopulate the database with test data, etc. I usually write a unit test method that calls my DataAccess method and label it with a Debug suffix, like this: FindLoanNotes_Debug(). I've been putting a System.Diagnostics.Debugger.Break() in it, so if I run it in debug mode I can manually check my results.
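Roughly like this (a sketch reconstructed from the description above; the LoanDataAccess class and the argument passed to FindLoanNotes are invented, not copied from the original project):
[TestMethod()]
public void FindLoanNotes_Debug()
{
    var dataAccess = new LoanDataAccess();          // hypothetical data-access class
    var notes = dataAccess.FindLoanNotes("12345");  // hypothetical loan identifier

    // No asserts on purpose: run in debug mode and inspect 'notes' by hand.
    System.Diagnostics.Debugger.Break();
}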
Point by point:
1) What's a unit test?
A unit test is a software test designed to test one distinct unit of functionality of software.
2) I understand that essentially you're attempting to isolate atomic functionality but how do you test for that?
Unit tests are actually a good way to enforce certain design principles; one aspect of them is that they have a subtle but significant effect on the design of the code. Designing for test is important: being able (or unable) to test a certain bit of code can matter a great deal, and when unit tests are in use, designs tend to migrate toward the "more atomic" end of the spectrum.
3) When is it necessary?
There's a lot of varying opinion on this one. Some say it's always necessary, some say it's completely unnecessary. I'd contend that most developers with experience with Unit Testing would say that Unit Tests are necessary for any critical path code that has a design that is amenable to Unit Testing (I know it's a bit circular, but see #2 above).
When is it ridiculous? Can you give an example?
Generally, overtesting is where you get into the ridiculous end of the spectrum. For example, if you have a 3D Vector class that has accessors for each of the scalar components, having unit tests for each of the scalar accessors confirming the complete range of inputs and verifying the values for each of them would be considered to be a bit of overkill by some. On the other hand, it's important to note that even those situations can be useful to test.
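For a concrete picture of the "overkill" end, imagine a hypothetical accessor test like this (Vector3 and its X property are invented for illustration):
[TestMethod]
public void X_AfterSettingX_ReturnsSameValue()
{
    var v = new Vector3();
    v.X = 1.5f;
    Assert.AreEqual(1.5f, v.X);
    // Repeating this for Y and Z across "the complete range of inputs" is the
    // sort of exhaustive accessor testing some would consider overkill.
}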
I mostly hear about it from Java devs on this site so maybe this is specific to Object Oriented languages?
No, it's really applicable to any software. The Unit Test methodology came to maturity with a Java environment, but it's really applicable to any language or environment.
What's it all about?
Unit Testing is, at a very basic level, all about verifying and validating that the behaviors that are expected from a unit of code are ACTUALLY what the code does.
A unit test is another piece of software that you write which exercises your main code for acceptance of desired functionality.
I could write a calculator program which looks nice, has the buttons, looks like a TI-whatever calculator - and it could still produce 2+2=5. It looks nice, but rather than send each iteration of the code to a human tester with a long list of checks, I, the developer, can run automated, coded unit tests on my code.
Basically, a unit test should be tested itself, by peers, or other careful review to answer "is this testing what I want it to?"
The unit test will have a set of "Givens", or "Inputs", and compare these to expected "Outputs".
There are, of course, different methodologies on how, when, and how much to use unit tests (check SO for some questions along these lines). However, in their most basic case, they are a program, or a loadable module of some other program, which makes assertions.
A standard grammar for a unit test might be to have a line of code which looks like this: Assert.AreEqual( a, b ).
The unit test method body might set up the inputs, and an actual output, and compare it to the expected output.
HelloWorldExample helloWorld = new HelloWorldExample();
string expected = "Hello World!";
string actual = helloWorld.GetString();
Assert.AreEqual( expected, actual );
If your unit test is written in the language of a particular framework (e.g. jUnit, NUnit, etc. ), the results of each method which is marked as part of a "test run" will be aggregated into a set of test results, such as a pretty graph of red dots for failure and green dots for successes, and/or an XML file, etc.
In response to your latest comments, "Theory" can provide some real world insight. TDD, Test Driven Development, says a lot about when and how often to use tests. On my latest project, we didn't adhere to TDD, but we sure used unit tests to verify that our code did what it was supposed to do.
Say you've chosen to implement the Car interface. The Car interface looks like this:
interface ICar
{
void Accelerate( int delta );
void Decelerate( int delta );
int GetCurrentSpeed();
}
You choose to implement the Car interface in the class FordTaurus:
class FordTaurus : ICar
{
private int mySpeed;
public void Accelerate( int delta )
{
mySpeed += delta;
}
public void Decelerate( int delta )
{
mySpeed += delta; // still adds: the implementer assumes callers pass a negative delta to slow down
}
public int GetCurrentSpeed()
{
return mySpeed;
}
}
You're assuming that to decelerate a FordTaurus, one must pass a negative value. However, suppose that you have a set of unit tests written against the Car interface, and they look like this:
public static void TestAcceleration( ICar car )
{
int oldSpeed = car.GetCurrentSpeed();
car.Accelerate( 5 );
int newSpeed = car.GetCurrentSpeed();
Assert.IsTrue( newSpeed > oldSpeed );
}
public static void TestDeceleration( ICar car )
{
int oldSpeed = car.GetCurrentSpeed();
car.Decelerate( 5 );
int newSpeed = car.GetCurrentSpeed();
Assert.IsTrue( newSpeed < oldSpeed );
}
The test tells you that maybe you've implemented the car interface incorrectly.
So you want examples? Last semester I took a compilers course. In it we had to write a register allocator. To put it in simple terms, my program can be summarized like this:
Input: A file written in ILOC, a pseudo-assembly language that was made up for my textbook. The instructions in the file have register names like "r<number>". The problem is the program uses as many registers as it needs, which is usually greater than the number of registers on the target machine.
Output: Another file written in ILOC. This time, the instructions are rewritten so that it uses the correct max number of registers that are allowed.
In order to write this program, I had to make a class that could parse an ILOC file. I wrote a bunch of tests for that class. Below are my tests (I actually had more, but got rid of them to help shorten this. I also added some comments to help you read it). I did the project in C++, so I used Google's C++ testing framework (googletest) located here.
Before showing you the code... let me say something about the basic structure. Essentially, there is a test class. You get to put a bunch of the general setup stuff in the test class. Then there are test macros called TEST_F's. The testing framework picks up on these and understands that they need to be run as tests. Each TEST_F has 2 arguments, the test class name, and the name of the test (which should be very descriptive... that way if the test fails, you know exactly what failed). You will see the structure of each test is similar: (1) set up some initial stuff, (2) run the method you are testing, (3) verify the output is correct. The way you check (3) is by using macros like EXPECT_*. EXPECT_EQ(expected, result) checks that result is equal to the expected. If it is not, you get a useful error message like "result was blah, but expected Blah".
Here is the code (I hope this isn't terribly confusing... it is certainly not a short or easy example, but if you take the time you should be able to follow and get the general flavor of how it works).
// Unit tests for the iloc_parser.{h, cc}
#include <fstream>
#include <iostream>
#include <gtest/gtest.h>
#include <sstream>
#include <string>
#include <vector>
#include "iloc_parser.h"
using namespace std;
namespace compilers {
// Here is my test class
class IlocParserTest : public testing::Test {
protected:
IlocParserTest() {}
virtual ~IlocParserTest() {}
virtual void SetUp() {
const testing::TestInfo* const test_info =
testing::UnitTest::GetInstance()->current_test_info();
test_name_ = test_info->name();
}
string test_name_;
};
// Here is a utility function to help me test
static void ReadFileAsString(const string& filename, string* output) {
ifstream in_file(filename.c_str());
stringstream result("");
string temp;
while (getline(in_file, temp)) {
result << temp << endl;
}
*output = result.str();
}
// All of these TEST_F things are macros that are part of the test framework I used.
// Just think of them as test functions. The argument is the name of the test class.
// The second one is the name of the test (A descriptive name so you know what it is
// testing).
TEST_F(IlocParserTest, ReplaceSingleInstanceOfSingleCharWithEmptyString) {
string to_replace = "blah,blah";
string to_find = ",";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblah", to_replace);
}
TEST_F(IlocParserTest, ReplaceMultipleInstancesOfSingleCharWithEmptyString) {
string to_replace = "blah,blah,blah";
string to_find = ",";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblahblah", to_replace);
}
TEST_F(IlocParserTest,
ReplaceMultipleInstancesOfMultipleCharsWithEmptyString) {
string to_replace = "blah=>blah=>blah";
string to_find = "=>";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblahblah", to_replace);
}
// This test was supposed to strip out the "r" from the
// register names in the ILOC code.
TEST_F(IlocParserTest, StripIlocLineLoadI) {
string iloc_line = "loadI\t1028\t=> r11";
IlocParser::StripIlocLine(&iloc_line);
EXPECT_EQ("loadI\t1028\t 11", iloc_line);
}
// Here I make sure stripping the line works when it has a comment
TEST_F(IlocParserTest, StripIlocLineSubWithComment) {
string iloc_line = "sub\tr12, r10\t=> r13 // Subtract r10 from r12\n";
IlocParser::StripIlocLine(&iloc_line);
EXPECT_EQ("sub\t12 10\t 13 ", iloc_line);
}
// Here I make sure I can break a line up into the tokens I wanted.
TEST_F(IlocParserTest, TokenizeIlocLineNormalInstruction) {
string iloc_line = "sub\t12 10\t 13\n"; // already stripped
vector<string> tokens;
IlocParser::TokenizeIlocLine(iloc_line, &tokens);
EXPECT_EQ(4, tokens.size());
EXPECT_EQ("sub", tokens[0]);
EXPECT_EQ("12", tokens[1]);
EXPECT_EQ("10", tokens[2]);
EXPECT_EQ("13", tokens[3]);
}
// Here I make sure I can create an instruction from the tokens
TEST_F(IlocParserTest, CreateIlocInstructionLoadI) {
vector<string> tokens;
tokens.push_back("loadI");
tokens.push_back("1");
tokens.push_back("5");
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_TRUE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(IlocInstruction::LOADI, instruction.op_code());
EXPECT_EQ(2, instruction.num_operands());
IlocInstruction::OperandList::const_iterator it = instruction.begin();
EXPECT_EQ(1, *it);
++it;
EXPECT_EQ(5, *it);
}
// Making sure the CreateIlocInstruction() method fails when it should.
TEST_F(IlocParserTest, CreateIlocInstructionFromMisspelledOp) {
vector<string> tokens;
tokens.push_back("ADD");
tokens.push_back("1");
tokens.push_back("5");
tokens.push_back("2");
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_FALSE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(0, instruction.num_operands());
}
// Make sure creating an empty instruction works because there
// were times when I would actually have an empty tokens vector.
TEST_F(IlocParserTest, CreateIlocInstructionFromNoTokens) {
// Empty, which happens from a line that is a comment.
vector<string> tokens;
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_TRUE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(IlocInstruction::NONE, instruction.op_code());
EXPECT_EQ(0, instruction.num_operands());
}
// This was a function that helped me generate actual code
// that I could output as a line in my output file.
TEST_F(IlocParserTest, MakeIlocLineFromInstructionAddI) {
IlocInstruction instruction(IlocInstruction::ADDI);
vector<int> operands;
operands.push_back(1);
operands.push_back(2);
operands.push_back(3);
instruction.CopyOperandsFrom(operands);
string output;
EXPECT_TRUE(IlocParser::MakeIlocLineFromInstruction(instruction, &output));
EXPECT_EQ("addI r1, 2 => r3", output);
}
// This test actually glued a bunch of stuff together. It actually
// read an input file (that was the name of the test) and parsed it
// I then checked that it parsed it correctly.
TEST_F(IlocParserTest, ParseIlocFileSimple) {
IlocParser parser;
vector<IlocInstruction*> lines;
EXPECT_TRUE(parser.ParseIlocFile(test_name_, &lines));
EXPECT_EQ(2, lines.size());
// Check first line
EXPECT_EQ(IlocInstruction::ADD, lines[0]->op_code());
EXPECT_EQ(3, lines[0]->num_operands());
IlocInstruction::OperandList::const_iterator operand = lines[0]->begin();
EXPECT_EQ(1, *operand);
++operand;
EXPECT_EQ(2, *operand);
++operand;
EXPECT_EQ(3, *operand);
// Check second line
EXPECT_EQ(IlocInstruction::LOADI, lines[1]->op_code());
EXPECT_EQ(2, lines[1]->num_operands());
operand = lines[1]->begin();
EXPECT_EQ(5, *operand);
++operand;
EXPECT_EQ(10, *operand);
// Deallocate memory
for (vector<IlocInstruction*>::iterator it = lines.begin();
it != lines.end();
++it) {
delete *it;
}
}
// This test made sure I generated an output file correctly.
// I built the file as an in memory representation, and then
// output it. I had a "golden file" that was supposed to represent
// the correct output. I compare my output to the golden file to
// make sure it was correct.
TEST_F(IlocParserTest, WriteIlocFileSimple) {
// Setup instructions
IlocInstruction instruction1(IlocInstruction::ADD);
vector<int> operands;
operands.push_back(1);
operands.push_back(2);
operands.push_back(3);
instruction1.CopyOperandsFrom(operands);
operands.clear();
IlocInstruction instruction2(IlocInstruction::LOADI);
operands.push_back(17);
operands.push_back(10);
instruction2.CopyOperandsFrom(operands);
operands.clear();
IlocInstruction instruction3(IlocInstruction::OUTPUT);
operands.push_back(1024);
instruction3.CopyOperandsFrom(operands);
// Populate lines with the instructions
vector<IlocInstruction*> lines;
lines.push_back(&instruction1);
lines.push_back(&instruction2);
lines.push_back(&instruction3);
// Write out the file
string out_filename = test_name_ + "_output";
string golden_filename = test_name_ + "_golden";
IlocParser parser;
EXPECT_TRUE(parser.WriteIlocFile(out_filename, lines));
// Read back output file and verify contents are as expected.
string golden_file;
string out_file;
ReadFileAsString(golden_filename, &golden_file);
ReadFileAsString(out_filename, &out_file);
EXPECT_EQ(golden_file, out_file);
}
} // namespace compilers
int main(int argc, char** argv) {
// Boiler plate, test initialization
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
After all is said and done... WHY DID I DO THIS!? Well first of all. I wrote the tests incrementally as I prepared to write each piece of code. It helped give me peace of mind that the code I already wrote was working properly. It would have been insane to write all my code and then just try it out on a file and see what happened. There were so many layers, how could I know where a bug would come from unless I had each little piece tested in isolation?
BUT... MOST IMPORTANTLY!!! Testing is not really about catching initial bugs in your code... it's about protecting yourself from accidentally breaking your code. Every time I refactored or altered my IlocParser class, I was confident I didn't alter it in a bad way because I could run my tests (in a matter of seconds) and see that all the code is still working as expected. THAT is the great use of unit tests.
They seem like they take too much time... but ultimately, they save you time tracking down bugs because you changed some code and don't know what happened. They are a useful way of verifying that small pieces of code are doing what they are supposed to do, and correctly.
In computer programming, unit testing is a software verification and validation method in which a programmer tests that individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a class, which may belong to a base/super class, abstract class or derived/child class.
http://en.wikipedia.org/wiki/Unit_testing
For instance, if you have a matrix class, you might have a unit test checking that
Matrix A = Matrix(.....);
assert(A.inverse() * A == Matrix::Identity);
Related
When people say "test only one thing". Does that mean that test one feature at a time or one scenario at a time?
method() {
//setup data
def data = new Data()
//send external webservice call
def success = service.webserviceCall(data)
//persist
if (success) {
data.save()
}
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to do the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure if I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my wording on the by feature/scenario, I'm not sure how to word them)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests; a rough sketch follows the list:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
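A rough C# sketch of the first of those tests - every name here (IWebService, FakeWebService, DataProcessor) is invented for illustration, since the question's method is pseudocode:
// Hypothetical seam for the external call.
public interface IWebService { bool Send(Data data); }

// Hand-rolled fake standing in for the real webservice.
class FakeWebService : IWebService
{
    public bool ResultToReturn = true;
    public Data LastSent;
    public bool Send(Data data) { LastSent = data; return ResultToReturn; }
}

[TestMethod]
public void Method_SendsTheDataItBuiltToTheWebservice()
{
    var fake = new FakeWebService();
    var sut = new DataProcessor(fake);   // class under test, with the fake injected

    sut.Method();

    Assert.IsNotNull(fake.LastSent);     // the call happened, with the data that was built
}
// The "saves on success" / "does not save on failure" tests follow the same shape,
// flipping fake.ResultToReturn and asserting on whatever "saved" means in your design.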
You might have heard the adage "one assert per test". This is good advice in general, because a test stops executing as soon as a single assert fails; all asserts further down are not executed. By splitting the asserts into multiple tests you get more feedback when something goes wrong. When tests go red, you know exactly which asserts fail and don't have to run through the "fix assertion failure, run tests, fix next assertion failure, repeat" cycle.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Sidenote: you construct your data object in the method itself and call the save method of that object. How do you sense that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with you where you quoted that with this approach it is "easier to isolate the errors". Roy Osherove really does know what he is talking about especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (webservice, and data storage) so I would still class this as a unit test with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly then that should be no concern of my test. My test should only be concerned with the external dependencies and public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purposes of testing its particular implementation then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that is done without a call to a database or file system, and it does not call a webservice either. The idea of a unit test is that if you had no internet connection at all, you should still be able to run it. Having said that, if a method calls a webservice or a database, you are basically expected to mock the responses from the external system; you should be testing that unit of work only. As mentioned above by prgmtc, asserting one thing per test method is the way to go.
Second, if you are calling a real webservice or database, etc., then consider calling those tests integrated or integration tests, depending on what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally have only a single assert statement. In reality you will often find you end up with a number of assert statements that act as a single logical assert, as this often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
//---------------Set up test pack-------------------
const string validBankAccount = "99999999999";
const string validBranchCode = "222222";
const string invalidAccountType = "99";
const string invalidAccountTypeResult = "3";
var bankAccountValidation = Substitute.For<IBankAccountValidation>();
bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
.Returns(invalidAccountTypeResult);
var service = new BankAccountCheckingService(bankAccountValidation);
//---------------Assert Precondition----------------
//---------------Execute Test ----------------------
var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
//---------------Test Result -----------------------
Assert.IsFalse(result.IsValid);
Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
bool IsValid { get; }
string Message { get; }
}
public class ValidationResult : IValidationResult
{
public static IValidationResult Success()
{
return new ValidationResult(true,"");
}
public static IValidationResult Failure(string message)
{
return new ValidationResult(false, message);
}
public ValidationResult(bool isValid, string message)
{
Message = message;
IsValid = isValid;
}
public bool IsValid { get; private set; }
public string Message { get; private set; }
}
Note: I would have unit tested the ValidationResult class itself as well, but in the test above I feel it gives more clarity to include both asserts.
I've read tons of articles and seen tons of screencasts about TDD, but I'm still struggling with using it in a real-world project. My main issue is I don't know where to start, what the first test should be.
Suppose I have to write client library calling external system's methods (e.g. notification).
I want this client to work as follows
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Response code = client.notifyOnEvent(Event.LIMIT_REACHED, 100); // some params of call
There is some translation and message format preparation behind the scenes, so I'd like to hide it from my client apps.
I don't know where and how to start.
Should I make up some rough classes set for this library?
Should I start with testing NotificationClient as below
public void testClientSendInvalidEventCommand() {
NotificationClient client = new NotificationClient(...);
Response code = client.notifyOnEvent(Event.WRONG_EVENT);
assertEquals(1223, code.codeValue());
}
If so, with such a test I'm forced to write a complete working implementation at once, with no baby steps as TDD prescribes. I can mock out something in the client, but then I have to know up front what is to be mocked, so I need some up-front design to be done.
Maybe I should start from the bottom, test the message-formatting component first, and then use it in the client test?
Which way is the right one to go?
Should we always start from the top (and if so, how do we deal with the huge first step that requires)?
Can we start with any class realizing a tiny part of the desired feature (such as the Formatter in this example)?
If I knew where to aim my tests, it would be a lot easier for me to proceed.
I'd start with this line:
NotificationClient client = new NotificationClient("abcd1234"); // client ID
Sounds like we need a NotificationClient, which needs a client ID. That's an easy thing to test for. My first test might look something like:
public void testNewClientAbcd1234HasClientId() {
NotificationClient client = new NotificationClient("abcd1234");
assertEquals("abcd1234", client.clientId());
}
Of course, it won't compile at first - not until I'd written a NotificationClient class with a constructor that takes a string parameter and a clientId() method that returns a string - but that's part of the TDD cycle.
public class NotificationClient {
public NotificationClient(String clientId) {
}
public String clientId() {
return "";
}
}
At this point, I can run my test and watch it fail (because I've hard-coded clientId()'s return to be an empty string). Once I've got my failing unit test, I write just enough production code (in NotificationClient) to get the test to pass:
public String clientId() {
return "abcd1234";
}
Now all my tests pass, so I can consider what to do next. The obvious (well, obvious to me) next step is to make sure that I can create clients whose ID isn't "abcd1234":
public void testNewClientBcde2345HasClientId() {
NotificationClient client = new NotificationClient("bcde2345");
assertEquals("bcde2345", client.clientId());
}
I run my test suite and observe that testNewClientBcde2345HasClientId() fails while testNewClientAbcd1234HasClientId() passes, and now I've got a good reason to add a member variable to NotificationClient:
public class NotificationClient {
private String _clientId;
public NotificationClient(String clientId) {
_clientId = clientId;
}
public String clientId() {
return _clientId;
}
}
Assuming no typographical errors have snuck in, that'll get all my tests to pass, and I can move on to whatever the next step is. (In your example, it would probably be testing that notifyOnEvent(Event.WRONG_EVENT) returns a Response whose codeValue() equals 1223.)
Does that help any?
Don't confuse acceptance tests (which hook into each end of your application and form an executable specification) with unit tests.
If you are doing 'pure' TDD you write an acceptance test which drives the unit tests that drive the implementation. testClientSendInvalidEventCommand is your acceptance test, but depending on how complicated things are you will delegate the implementation to multiple classes you can unit test separately.
How complicated things get before you have to split them up to test and understand them properly is why it is called Test Driven Design.
You can choose to let tests drive your design from the bottom up or from the top down. Both work well for different developers in different situations. Either approach will force you to make some of those "upfront" design decisions, but that's a good thing. Making those decisions in order to write your tests is test-driven design!
In your case you have an idea what the high level external interface to the system you are developing should be so let's start there. Write a test for how you think users of your notification client should interact with it and let it fail. This test is the basis for your acceptance or integration tests and they are going to continue failing until the features they describe are finished. That's ok.
Now step down one level. What are the steps which need to occur to provide that high level interface? Can we write an integration or unit test for those steps? Do they have dependencies you had not considered which might cause you to change the notification center interface you have started to define? Keep drilling down depth-first defining behavior with failing tests until you find that you have actually reached a unit test. Now implement enough to pass that unit test and continue. Get unit tests passing until you have built enough to pass an integration test and so on. You'll eventually have completed a depth-first construction of a tree of tests and should have a well tested feature whose design was driven by your tests.
One goal of TDD is that the testing informs the design. So the fact that you need to think about how to implement your NotificationClient is a good thing; it forces you to think of (hopefully) simple abstractions up front.
Also, TDD sort of assumes constant refactoring. Your first solution probably won't be the last; so as you refine your code the tests are there to tell you what breaks, from compile errors to actual runtime issues.
So I would just jump right in and start with the test you suggested. As you create mocks, you will need to create tests for the actual implementations of what you are mocking. You will find things make sense and need to be refactored, so you will need to modify your tests as you go. That's the way it's supposed to work...
I just wondered what the differences between unit testing and implementation testing are. I know unit testing is testing your modules/classes/objects using defined inputs and checking the results against defined outputs, but what does implementation testing do and how do you do it? Also, where does implementation testing fit in the development lifecycle?
"implementation testing" is not a common expression. I suspect that you meant "integration testing", since that is commonly used, especially in contrast to unit testing.
Integration testing means testing multiple parts or all of the system acting together. Often, the tests simulate an actual user working with the system through its regular UI.
The advantage is that you don't just test whether each component fulfils its contract, but also whether they are composed and configured correctly and interact as expected - things that you can't catch with unit tests. On the other hand, it's often hard to exhaustively test boundary conditions with integration tests, they're less stable and take much longer to execute. And of course they cannot be run (or even written) until most of the system is working.
Thus, integration tests happen much later in the development lifecycle than unit tests.
I've heard implementation testing used in two different contexts. First, it can be a test of the design. If you've got complex logic, you step through the logic before you hand it off to a coder - that way you don't waste time implementing something that you should have designed better. I've also heard it used as another term for V&V (validation and verification), where you make sure your implementation matches your requirements and that it meets the customer's vision.
Implementation is either PRE or POST.
In this case, implementation means "Putting live" I.e. - Into Production.
So pre-implementation testing means testing in pre-prod just before live.
Post-implementation testing means testing in live environment, once it has gone live.
I worked with Visual Studio Testing Tools, TestDriven.Net and Excel; all of them together make a very good solution. I wrote this unit test:
[TestMethod()]
public void viewFolderTest()
{
string Err = "";
connect_Excel("viewFolderTest");
DcDms actual;
DaDoc target = new DaDoc();
for (int i = 10; i < ds.Tables[0].Rows.Count; i++)
{
Err = "";
TestRow = ds.Tables[0].Rows[i]["Row"].ToString();
string expected = ds.Tables[0].Rows[i]["expected"].ToString();
string ParentId = ds.Tables[0].Rows[i]["ParentId"].ToString();
actual = target.viewFolder(ParentId);
try
{
Assert.AreEqual(expected,actual.Tables[DcDms.Dms_vrFileFolder].Rows.Count.ToString());
}
catch (System.Exception ex)
{
Err = ex.Message;
if (Err.Length > 255)
{
Err = Err.Substring(0, 255);
}
Update_Excel("viewFolderTest", "ERROR", Err, "Row", TestRow);
}
Update_Excel("viewFolderTest", "actual", actual.Tables[DcDms.Dms_vrFileFolder].Rows.Count.ToString(), "Row", TestRow);
if (Err == "")
{
Update_Excel("viewFolderTest", "ERROR", "Pass", "Row", TestRow);
}
}
}
I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value, or should I change the entry-value variable and call the assert method repeatedly in a single test (doing it for the whole range of possible values)?
Thank you for your help.
Best regards,
Pedro Magueija
edited: I'm using .NET. Visual Studio 2010 with VB.
If you are having to write many tests which vary only in initial input and final output, you should use a data-driven test. This allows you to define the test once, along with a mapping between inputs and outputs; the unit testing framework will then interpret it as one test per case. How to actually do this depends on which framework you are using.
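For example, with NUnit (usable from VB.NET as well) a parameterised test might look roughly like this in C# - the Calculator.Add method is just a stand-in for whatever you are actually testing:
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // One test definition, three cases; each [TestCase] shows up as its own test in the runner.
    [TestCase(1, 2, 3)]
    [TestCase(0, 0, 0)]
    [TestCase(-1, 1, 0)]
    public void Add_ReturnsSumOfInputs(int a, int b, int expected)
    {
        Assert.AreEqual(expected, Calculator.Add(a, b));
    }
}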
It's better to have separate unit tests for each input/output sets covering the full spectrum of possible values for the method you are trying to test (or at least for those input/output sets that you want to unit test).
Smaller tests are easier to read.
The name is part of the documentation of the test.
Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
// setup1
assert()
// setup2
assert()
// setup3
assert()
}
In my experience this gets very big very quickly, and so becomes hard to read and understand, so I would do:
void testDivideByZero() {
// setup
assert()
}
void testUnderflow() {
// setup
assert()
}
void testOverflow() {
// setup
assert()
}
Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
If you have one code path typically you do not test all possible inputs. What you usually want to test are "interesting" inputs that make good exemplars of the data you will get.
For example if I have a function
define add_one(num) {
return num+1;
}
I can't write a test for all possible values so I may use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set because they are a good representatives of interesting values I might get.
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser.
define execute(directive) {
if (directive == 'quit') { exit; }
elsif (directive == 'help') { print help; }
elsif (directive == 'connect') { intialize_connection(); }
else { warn("unknown directive"); }
}
For the purpose of clarity I used elsifs rather than a dispatch table. I think this makes it clear that each unique value that comes in has a different behavior, and therefore you would need to test every possible value.
Are you talking about this difference?
- (void) testSomething
{
[foo callBarWithValue:x];
assert…
}
- (void) testSomething2
{
[foo callBarWithValue:y];
assert…
}
vs.
- (void) testSomething
{
[foo callBarWithValue:x];
assert…
[foo callBarWithValue:y];
assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately. And of course, I only choose the latter when the test values really belong together and form a coherent unit.
You have two options really; you don't mention which test framework or language you are using, so one may not be applicable.
1) If your test framework supports it, use a RowTest. MbUnit and NUnit support this if you're using .NET; it allows you to put multiple attributes on your method, and each row is executed as a separate test.
2) If not, write a test per condition and give it a meaningful name, so that if (when) the test fails you can find the problem easily and the name means something to you.
EDIT
It's called TestCase in NUnit: NUnit TestCase explanation
I have a little JUnit test that exports an object to the file system. In the first place my test looked like this:
public void exportTest() {
//...creating a list with some objects to export...
JAXBService service = new JAXBService();
service.exportList(list, "output.xml");
}
Usually my tests contain an assertion like assertEquals(...). So I changed the code to the following:
public void exportCustomerListTest() throws Exception {
// delete the old resulting file, so we can test for a new one at the end
File file = new File("output.xml");
file.delete();
//...creating a list with some objects to export...
JAXBService service = new JAXBService();
service.exportCustomers(list, "output.xml");
// Test if a file has been created and if it contains some bytes
FileReader fis = new FileReader("output.xml");
int firstByte = fis.read();
assertTrue(firstByte != -1 );
}
Do I need this, or was the first approach enough? I am asking because the first one is actually just "testing" that the code runs, but not testing any results. Or can I rely on the "contract" that if the export method runs without an exception, the test passes?
Thanks
Well, you're testing that your code runs to completion without any exceptions - but you're not testing anything about the output.
Why not keep a file with the expected output, and compare that with the actual output? Note that this would probably be easier if you had an overload of exportCustomers which took a Writer - then you could pass in a StringWriter and only write to memory. You could test that in several ways, with just a single test of the overload which takes a filename, as that would just create a FileOutputStream wrapped in an OutputStreamWriter and then call the more thoroughly tested method. You'd really just need to check that the right file existed, probably.
You could use
assertTrue(new File("output.xml").exists());
If you notice problems during the generation of the file, you can unit test the generation process (and not the fact that the file was correctly written to and reloaded from the filesystem).
You can either go with the "golden file" approach (testing that two files are identical, one to one) or test various outputs of your generator (I imagine that the generation of the content is separated from the saving into the file).
I agree with the other posts. I will also add that your first test won't tell a test suite or test runner that this particular test has failed.
Sometimes a test only needs to demonstrate that no exceptions were thrown. In that case, relying on the fact that an exception will fail the test is good enough. There is certainly nothing gained in JUnit by calling the assertEquals method: a test passes when it doesn't throw an AssertionError, not because that method is called. Consider a method that allows null input; you might write a test like this:
@Test public void testNullAllowed() {
new CustomObject().methodThatAllowsNull(null);
}
That might be enough of a test right there (leave what it does with a null value to a separate test, or perhaps there is nothing interesting to test about it), although it is prudent to leave a comment that you didn't forget the assert - you left it out on purpose.
In your case, however, you haven't tested very much. Sure it didn't blow up, but an empty method wouldn't blow up either. Your second test is better, at least you demonstrate a non-empty file was created. But you can do better than that and check that at least some reasonable result was created.