Create JUnit unit test methods that give full code coverage - unit-testing

I am new to writing test methods using JUnit. Can someone help me, or at least give me a sample of how I can write a test method for this method that will cover it fully during code coverage?
Here is my method, wherein device can have single or multiple (comma separated) values, e.g. Apple or Apple,Samsung,Xiaomi
public List<String> getDeviceList(String device) {
if (StringUtils.hasText(device)) {
List<String> deviceList = Stream.of(device.split("[, ]")).collect(Collectors.toList());
return deviceList;
}
return null;
}
Thank you!
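A sketch of tests covering both branches, shown as plain, self-contained Java with manual assertions; in a real JUnit 4 class each case would be its own @Test method using assertEquals and assertNull, and the null/blank check below stands in for Spring's StringUtils.hasText (the class and method names here are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DeviceListExample {

    // Same logic as the method in the question; StringUtils.hasText is
    // replaced with an equivalent plain-JDK check so this compiles alone.
    public static List<String> getDeviceList(String device) {
        if (device != null && !device.trim().isEmpty()) {
            return Stream.of(device.split("[, ]")).collect(Collectors.toList());
        }
        return null;
    }

    public static void main(String[] args) {
        // Branch 1: comma-separated input is split into a list.
        if (!getDeviceList("Apple,Samsung,Xiaomi")
                .equals(Arrays.asList("Apple", "Samsung", "Xiaomi"))) {
            throw new AssertionError("multi-value case failed");
        }
        // Branch 1: a single value yields a one-element list.
        if (!getDeviceList("Apple").equals(Arrays.asList("Apple"))) {
            throw new AssertionError("single-value case failed");
        }
        // Branch 2: null or blank input returns null.
        if (getDeviceList(null) != null || getDeviceList("  ") != null) {
            throw new AssertionError("blank case failed");
        }
        System.out.println("all cases passed");
    }
}
```

Exercising the multi-value input, the single-value input, and the null/blank input hits every line and both branches of the method, which is what gets it to full coverage.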

Related

How to run specific xUnit test cases depending upon the app.config key value

We have to use an app.config key in our unit test cases, but the value of that key is supposed to differ between test cases. Depending on that value, how do we run specific cases? If one unit test case runs, then the other test cases should not run. How can this be managed programmatically in C#?
Thank you
One approach you can follow is to wrap the use of your configurations in an interface.
public interface IConfigurationWrapper
{
string GetConfigA();
string GetConfigB();
}
public class ConfigurationWrapper : IConfigurationWrapper
{
public string GetConfigA()
{
return ConfigurationManager.AppSettings["countoffiles"];
}
public string GetConfigB()
{
// read the second key the same way (key name here is illustrative)
return ConfigurationManager.AppSettings["someotherkey"];
}
}
And then in your tests mock it with NSubstitute or any other mocking library:
var configurations = Substitute.For<IConfigurationWrapper>();
configurations.GetConfigA().Returns("Config A for testing!");
Then you can use this mock (or mocks) to run parameterized tests with xUnit.
This project of mine has an implementation of this wrapper pattern for testing.

How can I unit test a method with database access?

I have already had some difficulties with unit tests, and I am trying to learn while using a small project I am currently working on. I ran into two problems that I hope you can help me with:
1- My project is an MVC project. At which level should my unit tests start? Should they focus only on the business layer? Should they also test actions on my controllers?
2- I have a method that verifies a username's format and then accesses the DB to check whether that username is available for use. The return is a boolean indicating whether the username is available.
Would one create a unit test for such a method?
I would be interested in testing the format verification, but how would I check it without querying the DB? Also, if the format is correct but the username is already in use, I will get a false value even though the validation worked. I could decouple this method, but the DB check should only happen if the format is correct, so the two are somehow tied together.
How would someone with unit tests knowledge solve this issue. Or how would someone refactor this method to be able to test it?
I could create a stub for the DB access, but how would I attach it to my project when running the tests and detach it when running the application normally?
Thanks!
In your specific case, one easy thing you could do is decompose your verification method into 3 different methods: one to check formatting, one to check DB availability, and one to tie them both together. This would allow you to test each of the sub-functions in isolation.
In more complex scenarios, other techniques may be useful. In essence, this is where dependency injection and inversion of control come in handy (unfortunately, those phrases mean different things to different people, but getting the basic ideas is usually a good start).
Your goal should be to decouple the concept of "Check if this username is available" from the implementation of checking the DB for it.
So, instead of this:
public class Validation
{
public bool CheckUsername(string username)
{
bool isFormatValid = IsFormatValid(username);
return isFormatValid && DB.CheckUsernameAvailability(username);
}
}
You could do something like this:
public class Validation
{
public bool CheckUsername(string username,
IUsernameAvailabilityChecker checker)
{
bool isFormatValid = IsFormatValid(username);
return isFormatValid && checker.CheckUsernameAvailability(username);
}
}
And then, from your unit test code, you can create a custom IUsernameAvailabilityChecker which does whatever you want for testing purposes. On the other hand, the actual production code can use a different implementation of IUsernameAvailabilityChecker to actually query the database.
Keep in mind that there are many, many techniques to solve this kind of testing problem, and the examples I gave are simple and contrived.
Testing against outside services can be done using mocking. If you've done a good job of using interfaces, it's very easy to mock the various parts of your application. These mocks can be injected into the unit under test and used as if they were the real dependencies.
You should start unit testing as soon as possible. If your application is not complete, or code needed for testing is absent, you can still test against an interface you can mock.
On a sidenote: Unit testing is about testing behavior and is not an effective way to find bugs. You will find bugs with testing but it should not be your goal.
For instance:
interface UserService {
public void setUserRepository(UserRepository userRepository);
public boolean isUsernameAvailable(String username);
}
class MyUserService implements UserService {
private UserRepository userRepository;
public void setUserRepository(UserRepository userRepository) {
this.userRepository = userRepository;
}
public boolean isUsernameAvailable(String username) {
return userRepository.checkUsernameAvailability(username);
}
}
interface UserRepository {
public boolean checkUsernameAvailability(String username);
}
// The mock used for testing
class MockUserRepository implements UserRepository {
public boolean checkUsernameAvailability(String username) {
if ("john".equals(username)) {
return false;
}
return true;
}
}
class MyUnitTest {
public void testIfUserNotAvailable() {
UserService service = new MyUserService();
service.setUserRepository(new MockUserRepository());
assertFalse(service.isUsernameAvailable("john")); // false: it's in use
}
public void testIfUserAvailable() {
UserService service = new MyUserService();
service.setUserRepository(new MockUserRepository());
assertTrue(service.isUsernameAvailable("mary")); // true: it's available
}
}

What's a unit test? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicates:
What is unit testing and how do you do it?
What is unit testing?
I recognize that to 95% of you, this is a very WTF question.
So. What's a unit test? I understand that essentially you're attempting to isolate atomic functionality but how do you test for that? When is it necessary? When is it ridiculous?
Can you give an example? (Preferably in C? I mostly hear about it from Java devs on this site so maybe this is specific to Object Oriented languages? I really don't know.)
I know many programmers swear by unit testing religiously. What's it all about?
EDIT: Also, what's the ratio of time you typically spend writing unit tests to time spent writing new code?
I work in Java now; before that C++, and before that C. I am entirely convinced that every piece of work I have done that I am not now ashamed of was enhanced by the testing strategies I picked. Skimping on testing hurts.
I'm sure that you test the code you write. What techniques do you use? For example, you might sit in a debugger, step through the code, and watch what happens. You might execute the code against some test data someone gave you. You might devise particular inputs because you know that your code has some interesting behaviours for certain input values. Suppose your stuff uses someone else's stuff and that's not ready yet; you mock up their code so that your code can work with at least some fake answers.
In all cases you may be, to some degree, Unit Testing. The last one is particularly interesting - you are very much testing in isolation, testing your UNIT, even if theirs is not yet ready.
My opinion:
1). Tests that can easily be rerun are very useful - they catch no end of late-creeping defects.
In contrast, testing by sitting in a debugger is mind-numbing.
2). The activity of constructing interesting tests as you write your code, or BEFORE you write your code, makes you focus on your fringe cases: those annoying zero and null inputs, those off-by-one errors. I perceive better code coming out as a result of good unit tests.
3). There is a cost to maintaining the tests. Generally it's worth it, but don't underestimate the effort of keeping them working.
4). There can be a tendency to over-emphasise Unit Tests. The really interesting bugs tend to creep in when pieces are integrated. You replace that library you mocked with the real thing and lo! It doesn't quite do what it said on the tin. Also, there is still a role for manual or exploratory testing. The insightful human tester finds special defects.
The simplest, non-technical definition I can come up with: an automated way to test parts of your code...
I use it and love it... but not religiously. One of my proudest moments in unit testing was an interest calculation that I did for a bank: extremely complicated, and I had only one bug, and there was no unit test for that case... as soon as I added the case and fixed my code, it was perfect.
So, taking that example: I had a class called InterestCalculation, and it had properties for all of the arguments and a single public method Calculate(). There were several steps to the calculation, and if I were to try to write the whole thing in a single method and just check my result, it would have been overwhelming to try to find where my bug(s) were... So I took each step of the calculation, created a private method for it, and wrote unit tests for all of the different cases. (Some people will tell you to only test public methods, but in this scenario it worked better for me...) One example of the private methods was:
Method:
/// <summary>
///
/// </summary>
/// <param name="effectiveDate"></param>
/// <param name="lastCouponDate"></param>
/// <returns></returns>
private Int32 CalculateNumberDaysSinceLastCouponDate(DateTime effectiveDate, DateTime lastCouponDate)
{
Int32 result = 0;
if (lastCouponDate.Month == effectiveDate.Month)
{
result = this._Parameters.DayCount.GetDayOfMonth(effectiveDate) - lastCouponDate.Day;
}
else
{
result = this._Parameters.DayCount.GetNumberOfDaysInMonth(lastCouponDate)
- lastCouponDate.Day + effectiveDate.Day;
}
return result;
}
Test Methods:
Note: I would name them differently now; instead of numbers I would basically put the summary into the method name.
/// <summary>
///A test for CalculateNumberDaysSinceLastCouponDate
///</summary>
[TestMethod()]
[DeploymentItem("WATrust.CAPS.DataAccess.dll")]
public void CalculateNumberDaysSinceLastCouponDateTest1()
{
AccruedInterestCalculationMonthly_Accessor target = new AccruedInterestCalculationMonthly_Accessor();
target._Parameters = new AccruedInterestCalculationMonthlyParameters();
target._Parameters.DayCount = new DayCount(13);
DateTime effectiveDate = DateTime.Parse("04/22/2008");
DateTime lastCouponDate = DateTime.Parse("04/15/2008");
int expected = 7;
int actual;
actual = target.CalculateNumberDaysSinceLastCouponDate(effectiveDate, lastCouponDate);
Assert.AreEqual(expected, actual);
WriteToConsole(expected, actual);
}
/// <summary>
///A test for CalculateNumberDaysSinceLastCouponDate
///</summary>
[TestMethod()]
[DeploymentItem("WATrust.CAPS.DataAccess.dll")]
public void CalculateNumberDaysSinceLastCouponDateTest2()
{
AccruedInterestCalculationMonthly_Accessor target = new AccruedInterestCalculationMonthly_Accessor();
target._Parameters = new AccruedInterestCalculationMonthlyParameters();
target._Parameters.DayCount = new DayCount((Int32)
DayCount.DayCountTypes.ThirtyOverThreeSixty);
DateTime effectiveDate = DateTime.Parse("04/10/2008");
DateTime lastCouponDate = DateTime.Parse("03/15/2008");
int expected = 25;
int actual;
actual = target.CalculateNumberDaysSinceLastCouponDate(effectiveDate, lastCouponDate);
Assert.AreEqual(expected, actual);
WriteToConsole(expected, actual);
}
Where is it Ridiculous?
Well, to each his own... The more you do it, the more you will find where it is useful and where it seems to "be ridiculous". Personally, I don't use it to test my database in the way most hardcore unit testers would - in the sense of having scripts to rebuild the database schema, repopulate the database with test data, etc. I usually write a unit test method to call my DataAccess method and label it with a Debug suffix, like this: FindLoanNotes_Debug(), and I've been putting in System.Diagnostics.Debugger.Break() so that if I run them in debug mode I can manually check my results.
Point by point:
1) What's a unit test?
A unit test is a software test designed to test one distinct unit of functionality of software.
2) I understand that essentially you're attempting to isolate atomic functionality but how do you test for that?
Unit tests are actually a good way to enforce certain design principles; one aspect of them is that they have a subtle but significant effect on the design of the code. Designing for test is an important thing; being able to test (or not) a certain bit of code can be very important, and when unit tests are being used, designs tend to migrate toward the "more atomic" side of the spectrum.
3) When is it necessary?
There's a lot of varying opinion on this one. Some say it's always necessary, some say it's completely unnecessary. I'd contend that most developers with experience with Unit Testing would say that Unit Tests are necessary for any critical path code that has a design that is amenable to Unit Testing (I know it's a bit circular, but see #2 above).
When is it ridiculous? Can you give an example?
Generally, overtesting is where you get into the ridiculous end of the spectrum. For example, if you have a 3D Vector class that has accessors for each of the scalar components, having unit tests for each of the scalar accessors confirming the complete range of inputs and verifying the values for each of them would be considered to be a bit of overkill by some. On the other hand, it's important to note that even those situations can be useful to test.
I mostly hear about it from Java devs on this site so maybe this is specific to Object Oriented languages?
No, it's really applicable to any software. The Unit Test methodology came to maturity with a Java environment, but it's really applicable to any language or environment.
What's it all about?
Unit Testing is, at a very basic level, all about verifying and validating that the behaviors that are expected from a unit of code are ACTUALLY what the code does.
A unit test is another piece of software that you write which exercises your main code for acceptance of desired functionality.
I could write a calculator program which looks nice, has the buttons, looks like a TI-whatever calculator, and it could produce 2+2=5. Looks nice, but rather than send each iteration of some code to a human tester, with a long list of checks, I, the developer can run some automated, coded, unit tests on my code.
Basically, a unit test should be tested itself, by peers, or other careful review to answer "is this testing what I want it to?"
The unit test will have a set of "Givens", or "Inputs", and compare these to expected "Outputs".
There are, of course, different methodologies on how, when, and how much to use unit tests (check SO for some questions along these lines). However, in their most basic case, they are a program, or a loadable module of some other program, which makes assertions.
A standard grammar for a unit test might be to have a line of code which looks like this: Assert.AreEqual( a, b ).
The unit test method body might set up the inputs, and an actual output, and compare it to the expected output.
HelloWorldExample helloWorld = new HelloWorldExample();
string expected = "Hello World!";
string actual = helloWorld.GetString();
Assert.AreEqual( expected, actual );
If your unit test is written in the language of a particular framework (e.g. jUnit, NUnit, etc. ), the results of each method which is marked as part of a "test run" will be aggregated into a set of test results, such as a pretty graph of red dots for failure and green dots for successes, and/or an XML file, etc.
In response to your latest comments, "Theory" can provide some real world insight. TDD, Test Driven Development, says a lot about when and how often to use tests. On my latest project, we didn't adhere to TDD, but we sure used unit tests to verify that our code did what it was supposed to do.
Say you've chosen to implement the Car interface. The Car interface looks like this:
interface ICar
{
public void Accelerate( int delta );
public void Decelerate( int delta );
public int GetCurrentSpeed();
}
You choose to implement the Car interface in the class FordTaurus:
class FordTaurus : ICar
{
private int mySpeed;
public void Accelerate( int delta )
{
mySpeed += delta;
}
public void Decelerate( int delta )
{
mySpeed += delta;
}
public int GetCurrentSpeed()
{
return mySpeed;
}
}
You're assuming that to decelerate a FordTaurus, one must pass a negative value. However, suppose that you have a set of unit tests written against the Car interface, and they look like this:
public static void TestAcceleration( ICar car )
{
int oldSpeed = car.GetCurrentSpeed();
car.Accelerate( 5 );
int newSpeed = car.GetCurrentSpeed();
Assert.IsTrue( newSpeed > oldSpeed );
}
public static void TestDeceleration( ICar car )
{
int oldSpeed = car.GetCurrentSpeed();
car.Decelerate( 5 );
int newSpeed = car.GetCurrentSpeed();
Assert.IsTrue( newSpeed < oldSpeed );
}
The test tells you that maybe you've implemented the car interface incorrectly.
So you want examples? Last semester I took a compilers course. In it we had to write a register allocator. To put it in simple terms, my program can be summarized like this:
Input: A file written in ILOC, a pseudo-assembly language that was made up for my textbook. The instructions in the file have register names like "r<number>". The problem is the program uses as many registers as it needs, which is usually greater than the number of registers on the target machine.
Output: Another file written in ILOC. This time, the instructions are rewritten so that it uses the correct max number of registers that are allowed.
In order to write this program, I had to make a class that could parse an ILOC file. I wrote a bunch of tests for that class. Below are my tests (I actually had more, but got rid of them to help shorten this. I also added some comments to help you read it). I did the project in C++, so I used Google's C++ testing framework (googletest).
Before showing you the code... let me say something about the basic structure. Essentially, there is a test class. You get to put a bunch of the general setup stuff in the test class. Then there are test macros called TEST_F's. The testing framework picks up on these and understands that they need to be run as tests. Each TEST_F has 2 arguments, the test class name, and the name of the test (which should be very descriptive... that way if the test fails, you know exactly what failed). You will see the structure of each test is similar: (1) set up some initial stuff, (2) run the method you are testing, (3) verify the output is correct. The way you check (3) is by using macros like EXPECT_*. EXPECT_EQ(expected, result) checks that result is equal to the expected. If it is not, you get a useful error message like "result was blah, but expected Blah".
Here is the code (I hope this isn't terribly confusing... it is certainly not a short or easy example, but if you take the time you should be able to follow and get the general flavor of how it works).
// Unit tests for the iloc_parser.{h, cc}
#include <fstream>
#include <iostream>
#include <gtest/gtest.h>
#include <sstream>
#include <string>
#include <vector>
#include "iloc_parser.h"
using namespace std;
namespace compilers {
// Here is my test class
class IlocParserTest : public testing::Test {
protected:
IlocParserTest() {}
virtual ~IlocParserTest() {}
virtual void SetUp() {
const testing::TestInfo* const test_info =
testing::UnitTest::GetInstance()->current_test_info();
test_name_ = test_info->name();
}
string test_name_;
};
// Here is a utility function to help me test
static void ReadFileAsString(const string& filename, string* output) {
ifstream in_file(filename.c_str());
stringstream result("");
string temp;
while (getline(in_file, temp)) {
result << temp << endl;
}
*output = result.str();
}
// All of these TEST_F things are macros that are part of the test framework I used.
// Just think of them as test functions. The argument is the name of the test class.
// The second one is the name of the test (A descriptive name so you know what it is
// testing).
TEST_F(IlocParserTest, ReplaceSingleInstanceOfSingleCharWithEmptyString) {
string to_replace = "blah,blah";
string to_find = ",";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblah", to_replace);
}
TEST_F(IlocParserTest, ReplaceMultipleInstancesOfSingleCharWithEmptyString) {
string to_replace = "blah,blah,blah";
string to_find = ",";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblahblah", to_replace);
}
TEST_F(IlocParserTest,
ReplaceMultipleInstancesOfMultipleCharsWithEmptyString) {
string to_replace = "blah=>blah=>blah";
string to_find = "=>";
string replace_with = "";
IlocParser::FindAndReplace(to_find, replace_with, &to_replace);
EXPECT_EQ("blahblahblah", to_replace);
}
// This test was supposed to strip out the "r" from
// register names in the ILOC code.
TEST_F(IlocParserTest, StripIlocLineLoadI) {
string iloc_line = "loadI\t1028\t=> r11";
IlocParser::StripIlocLine(&iloc_line);
EXPECT_EQ("loadI\t1028\t 11", iloc_line);
}
// Here I make sure stripping the line works when it has a comment
TEST_F(IlocParserTest, StripIlocLineSubWithComment) {
string iloc_line = "sub\tr12, r10\t=> r13 // Subtract r10 from r12\n";
IlocParser::StripIlocLine(&iloc_line);
EXPECT_EQ("sub\t12 10\t 13 ", iloc_line);
}
// Here I make sure I can break a line up into the tokens I wanted.
TEST_F(IlocParserTest, TokenizeIlocLineNormalInstruction) {
string iloc_line = "sub\t12 10\t 13\n"; // already stripped
vector<string> tokens;
IlocParser::TokenizeIlocLine(iloc_line, &tokens);
EXPECT_EQ(4, tokens.size());
EXPECT_EQ("sub", tokens[0]);
EXPECT_EQ("12", tokens[1]);
EXPECT_EQ("10", tokens[2]);
EXPECT_EQ("13", tokens[3]);
}
// Here I make sure I can create an instruction from the tokens
TEST_F(IlocParserTest, CreateIlocInstructionLoadI) {
vector<string> tokens;
tokens.push_back("loadI");
tokens.push_back("1");
tokens.push_back("5");
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_TRUE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(IlocInstruction::LOADI, instruction.op_code());
EXPECT_EQ(2, instruction.num_operands());
IlocInstruction::OperandList::const_iterator it = instruction.begin();
EXPECT_EQ(1, *it);
++it;
EXPECT_EQ(5, *it);
}
// Making sure the CreateIlocInstruction() method fails when it should.
TEST_F(IlocParserTest, CreateIlocInstructionFromMisspelledOp) {
vector<string> tokens;
tokens.push_back("ADD");
tokens.push_back("1");
tokens.push_back("5");
tokens.push_back("2");
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_FALSE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(0, instruction.num_operands());
}
// Make sure creating an empty instruction works because there
// were times when I would actually have an empty tokens vector.
TEST_F(IlocParserTest, CreateIlocInstructionFromNoTokens) {
// Empty, which happens from a line that is a comment.
vector<string> tokens;
IlocInstruction instruction(IlocInstruction::NONE);
EXPECT_TRUE(IlocParser::CreateIlocInstruction(tokens,
&instruction));
EXPECT_EQ(IlocInstruction::NONE, instruction.op_code());
EXPECT_EQ(0, instruction.num_operands());
}
// This was a function that helped me generate actual code
// that I could output as a line in my output file.
TEST_F(IlocParserTest, MakeIlocLineFromInstructionAddI) {
IlocInstruction instruction(IlocInstruction::ADDI);
vector<int> operands;
operands.push_back(1);
operands.push_back(2);
operands.push_back(3);
instruction.CopyOperandsFrom(operands);
string output;
EXPECT_TRUE(IlocParser::MakeIlocLineFromInstruction(instruction, &output));
EXPECT_EQ("addI r1, 2 => r3", output);
}
// This test actually glued a bunch of stuff together. It actually
// read an input file (that was the name of the test) and parsed it
// I then checked that it parsed it correctly.
TEST_F(IlocParserTest, ParseIlocFileSimple) {
IlocParser parser;
vector<IlocInstruction*> lines;
EXPECT_TRUE(parser.ParseIlocFile(test_name_, &lines));
EXPECT_EQ(2, lines.size());
// Check first line
EXPECT_EQ(IlocInstruction::ADD, lines[0]->op_code());
EXPECT_EQ(3, lines[0]->num_operands());
IlocInstruction::OperandList::const_iterator operand = lines[0]->begin();
EXPECT_EQ(1, *operand);
++operand;
EXPECT_EQ(2, *operand);
++operand;
EXPECT_EQ(3, *operand);
// Check second line
EXPECT_EQ(IlocInstruction::LOADI, lines[1]->op_code());
EXPECT_EQ(2, lines[1]->num_operands());
operand = lines[1]->begin();
EXPECT_EQ(5, *operand);
++operand;
EXPECT_EQ(10, *operand);
// Deallocate memory
for (vector<IlocInstruction*>::iterator it = lines.begin();
it != lines.end();
++it) {
delete *it;
}
}
// This test made sure I generated an output file correctly.
// I built the file as an in memory representation, and then
// output it. I had a "golden file" that was supposed to represent
// the correct output. I compare my output to the golden file to
// make sure it was correct.
TEST_F(IlocParserTest, WriteIlocFileSimple) {
// Setup instructions
IlocInstruction instruction1(IlocInstruction::ADD);
vector<int> operands;
operands.push_back(1);
operands.push_back(2);
operands.push_back(3);
instruction1.CopyOperandsFrom(operands);
operands.clear();
IlocInstruction instruction2(IlocInstruction::LOADI);
operands.push_back(17);
operands.push_back(10);
instruction2.CopyOperandsFrom(operands);
operands.clear();
IlocInstruction instruction3(IlocInstruction::OUTPUT);
operands.push_back(1024);
instruction3.CopyOperandsFrom(operands);
// Populate lines with the instructions
vector<IlocInstruction*> lines;
lines.push_back(&instruction1);
lines.push_back(&instruction2);
lines.push_back(&instruction3);
// Write out the file
string out_filename = test_name_ + "_output";
string golden_filename = test_name_ + "_golden";
IlocParser parser;
EXPECT_TRUE(parser.WriteIlocFile(out_filename, lines));
// Read back output file and verify contents are as expected.
string golden_file;
string out_file;
ReadFileAsString(golden_filename, &golden_file);
ReadFileAsString(out_filename, &out_file);
EXPECT_EQ(golden_file, out_file);
}
} // namespace compilers
int main(int argc, char** argv) {
// Boiler plate, test initialization
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
After all is said and done... WHY DID I DO THIS!? Well, first of all, I wrote the tests incrementally as I prepared to write each piece of code. It helped give me peace of mind that the code I had already written was working properly. It would have been insane to write all my code and then just try it out on a file and see what happened. There were so many layers; how could I know where a bug came from unless I had each little piece tested in isolation?
BUT... MOST IMPORTANTLY!!! Testing is not really about catching initial bugs in your code... it's about protecting yourself from accidentally breaking your code. Every time I refactored or altered my IlocParser class, I was confident I didn't alter it in a bad way because I could run my tests (in a matter of seconds) and see that all the code is still working as expected. THAT is the great use of unit tests.
They seem like they take too much time... but ultimately, they save you time tracking down bugs because you changed some code and don't know what happened. They are a useful way of verifying that small pieces of code are doing what they are supposed to do, and correctly.
In computer programming, unit testing is a software verification and validation method in which a programmer tests that individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a class, which may belong to a base/super class, abstract class or derived/child class.
http://en.wikipedia.org/wiki/Unit_testing
For instance, if you have a matrix class, you might have a unit test checking that
Matrix A = Matrix(.....);
A.inverse() * A == Matrix::Identity
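That pseudocode can be turned into a runnable check. Below is a sketch using a hand-rolled 2x2 matrix rather than any particular library - the Matrix class above is pseudocode, so the class name, the helper methods, and the closed-form 2x2 inverse here are all my own:

```java
public class MatrixInverseTest {

    // Multiply two 2x2 matrices.
    static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[2][2];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                for (int k = 0; k < 2; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Invert a 2x2 matrix via the closed-form determinant formula.
    static double[][] inverse(double[][] a) {
        double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
        return new double[][] {
            { a[1][1] / det, -a[0][1] / det },
            { -a[1][0] / det, a[0][0] / det }
        };
    }

    public static void main(String[] args) {
        double[][] a = { { 4, 7 }, { 2, 6 } };
        double[][] p = multiply(inverse(a), a);
        // The unit test's assertion: inverse(A) * A should be the identity,
        // up to floating-point tolerance.
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                if (Math.abs(p[i][j] - (i == j ? 1 : 0)) > 1e-9)
                    throw new AssertionError("not identity at " + i + "," + j);
        System.out.println("inverse(A) * A == identity: OK");
    }
}
```

Note the tolerance comparison: with floating-point arithmetic an exact == against the identity would be too strict for a real test.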

Running multiple test data on same set of test cases

I am new to Eclipse. I am using JUnit 4, and I have written a setUp() method in my class (which extends TestCase) where some initialization happens. I have some test cases in the same class. I have test data in zipped form attached to the workspace.
Currently I am able to run all the test cases against a single set of test data. Somehow I want control to go back to setUp() to pick up the second set of test data and run all the test cases again.
Is that possible? And if yes, can anyone please send a code snippet?
Thanks in advance
Thanks for the reply, but where should I keep such code? Should it be kept in the setUp() method, and how will the test data be picked up from setUp()?
You need to use the Parameterized runner. It allows you to run the same tests with multiple sets of test data. For example, the following will run the tests four times, with the parameter "number" changed each time to the next value in the array.
@RunWith(value = Parameterized.class)
public class StackTest {
Stack<Integer> stack;
private int number;
public StackTest(int number) {
this.number = number;
}
@Parameters
public static Collection data() {
Object[][] data = new Object[][] { { 1 }, { 2 }, { 3 }, { 4 } };
return Arrays.asList(data);
}
...
}
Edit
Not sure what isn't clear, but I'll attempt to clarify.
The @RunWith(value = Parameterized.class) annotation is required. You must have a method annotated with @Parameters that returns a Collection object, each element of which must be an Array of the various parameters used for the test. You must have a public constructor that will accept these parameters.
Additional information, and another example can be found in the documentation.
Even more examples.
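If the mechanics are still unclear, the runner's behaviour can be imitated in plain Java: for each element of the data collection it constructs a fresh instance of the test class, passing the element to the constructor, and then runs every test method on that instance. A self-contained sketch of that loop (no JUnit dependency; the class, test method, and data values are illustrative):

```java
public class ParameterizedSketch {

    private final int number;

    // Parameterized passes each data element to this constructor.
    public ParameterizedSketch(int number) {
        this.number = number;
    }

    // Stand-in for a @Test method: check some property of the parameter.
    public void testNumberIsPositive() {
        if (number <= 0) {
            throw new AssertionError("expected positive, got " + number);
        }
    }

    // Stand-in for the @Parameters data() method.
    public static int[] data() {
        return new int[] { 1, 2, 3, 4 };
    }

    public static void main(String[] args) {
        // What the Parameterized runner does, in essence: one fresh
        // instance per parameter set, then every test method runs on it.
        for (int value : data()) {
            ParameterizedSketch test = new ParameterizedSketch(value);
            test.testNumberIsPositive();
            System.out.println("tests passed for number = " + value);
        }
    }
}
```

This is why, with the real runner, setUp() runs again for every parameter set: each set gets its own instance of the test class.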

How To Write CRUD Unit Tests for Moq and Linq-to-Sql

I am just getting involved in Moq and unit testing, so forgive me if this seems obvious (a quick search through SO didn't show me anything like this).
I have an interface with the following proposed member:
void AddFeed(Feed feed);
That I would like to write a unit test for this functionality. The test class has a Moq Repository declared as follows:
static IFeedRepository MockFeedsRepository(params Feed[] feeds)
{
var mockFeedsRepository = new Moq.Mock<IFeedRepository>();
mockFeedsRepository.Expect(x => x.Feeds).Returns(feeds.AsQueryable());
return mockFeedsRepository.Object;
}
How should the mock repository declaration be modified to include this new desired behavior, or should I create a different mock (and just how would that be done)?
My assumption is that after creating the mock, deriving the unit test will be much easier but hints are greatly appreciated.
Many thanks,
KevDog
I'm also assuming that you would use the AddFeed method like this
Feed myNewFeed = new Feed();
feedRepository.Add(myNewFeed);
and that you're not using it like this (which is poor design)
IFeedRepository feedRepository = new FeedRepository();
Feed myNewFeed = new Feed(feedRepository);
...
myNewFeed.Save();
I'm going to guess that you would want to then have a test something like this:
[Test]
public void TheTest()
{
IFeedRepository repository = MockFeedsRepository(feed1, feed2, feed3);
Feed newFeed = new Feed();
repository.Add(newFeed);
Assert.AreEqual(4,repository.Count());
}
If that's the case then the test isn't actually testing anything other than your implementation of a mock in-memory repository. Is that what you really need to be doing?
I'd suggest that what you want to be doing is testing the Linq-to-Sql implementation of the repository instead, or testing how classes interact with the IFeedRepository interface.
And if you want to test the usage of the IFeedRepository interface then just do something simple like
[Test]
public void TheTest()
{
var repository = new Moq.Mock<IFeedRepository>();
Feed newFeed = new Feed();
repository.Expect(r => r.Add(newFeed)); //no return as it's a void method
//repository.Expect(r => r.Add(newFeed)).Throws(new ApplicationException()); --Test handling of exceptions
//Code to hit the .Add() method (through repository.Object)
//Assert the Add method was called.
}
For tips on asserting if a method was called, see Using Moq to determine if a method is called
I hope that helps