Condition Coverage and Unit Testing - unit-testing

When writing unit tests (with JUnit), is it necessary to separate test methods to reach complete condition coverage?
Take this snippet, for example:
int foo(boolean a, boolean b, boolean c) {
    if (a && b && c)
        return 1;
    else
        return 0;
}
Is it better to write one test method with different assertions to get condition coverage of this if, or one test method per condition?
@Test
void conditionsTest() {
    assertEquals(0, foo(true, false, false));
    assertEquals(0, foo(true, true, false));
    assertEquals(1, foo(true, true, true));
    ...
}
OR
@Test
void condition1Test() {
    assertEquals(0, foo(true, false, false));
}

@Test
void condition2Test() {
    assertEquals(0, foo(true, true, false));
}

@Test
void condition3Test() {
    assertEquals(1, foo(true, true, true));
}

While it is not necessary to split the tests, it can be better to do so:
The all-in-one test method aborts on the first failed assertion, so you get no information about the remaining combinations. With separate tests you get detailed information about exactly which combinations fail. Note that a parameterized test achieves the same.
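For example, with JUnit 5 a parameterized test reports each combination as its own invocation. A minimal sketch, assuming junit-jupiter-params is on the classpath and reusing the foo() method from the question:
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FooConditionCoverageTest {

    // Each CSV row becomes its own test invocation, so the report shows
    // exactly which combination of conditions failed.
    @ParameterizedTest
    @CsvSource({
        "true, false, false, 0",
        "true, true, false, 0",
        "true, true, true, 1"
    })
    void conditionCombinations(boolean a, boolean b, boolean c, int expected) {
        assertEquals(expected, foo(a, b, c));
    }

    // Copied from the question so the sketch is self-contained.
    int foo(boolean a, boolean b, boolean c) {
        if (a && b && c)
            return 1;
        else
            return 0;
    }
}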
From a coverage point of view, the all-in-one test amasses the coverage of all combinations in a single test, with no way to analyze the contribution of each combination of conditions. Per-test attribution becomes even more valuable when you go for more complex condition coverage metrics like MC/DC.
Granted: the second aspect depends on the granularity of data collection provided by your coverage tool. Our company's tool Coco collects coverage per test function, but so far only for C, C++ and C#. You may want to look for an equivalent capability for Java.

Related

Allure Framework: How to fail only one step in the test method

Does anyone know how to fail only one step in a test and let the test finish all steps, using the Allure framework?
For example, I have one test which consists of 3 test steps, and each of the steps has its own assertion. It can look like this:
@Test
public void test() {
    step1();
    step2();
    step3();
}

@Step
public void step1() {
    Assert.assertEquals(1, 0);
}

@Step
public void step2() {
    Assert.assertEquals(1, 1);
}

@Step
public void step3() {
    Assert.assertEquals(2, 2);
}
When step1 fails, the test method fails too. Is there a possibility to still run the other two steps with their own assertions instead of aborting the test? Like TestNG does with SoftAssert (org.testng.asserts.SoftAssert).
And as a result I would like to see a report showing all broken and passed test steps (in one test method), like the report screenshot in the Allure 1.4.9 release: https://github.com/allure-framework/allure-core/releases/tag/allure-core-1.4.9
Maybe you can, but you shouldn't. You're breaking the concept of a test. A test is something that either passes or fails with a description of a failure. It is not something that can partially fail.
When you write a test you should include only those assertions that are bound to each other, so that if the first assertion fails, the second one tells you nothing new. If you have assertions that are not dependent on each other, you'd better make separate test methods; they will be completely independent and will fail separately.
In short, the test should not continue after a failed step and that's it. Otherwise – it's a bad test.
P.S. That's why JUnit does not allow soft assertions.
P.P.S. If you really, really, really need to check all three things, a possible workaround is using an ErrorCollector.
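For illustration, a minimal sketch of that ErrorCollector workaround with JUnit 4 (org.junit.rules.ErrorCollector); the class name and check values are made up to mirror the question's three steps:
import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class StepsTest {

    // Collects failures instead of aborting at the first one; the test is
    // still reported as failed at the end, with every collected error listed.
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void allStepsAreChecked() {
        collector.checkThat("step1", 0, equalTo(1)); // fails, but execution continues
        collector.checkThat("step2", 1, equalTo(1)); // still runs
        collector.checkThat("step3", 2, equalTo(2)); // still runs
    }
}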

Unit test thoroughness - test passing conditions as well as failing ones?

Should unit tests test all passing conditions as well as all failing conditions?
For example, imagine I have a test Widget_CannotActiveWidgetIfStateIsCancelled.
And let's say there are 100 possible states.
Can I get away with testing only that I cannot activate my widget when State == Cancelled, or do I have to also test that I CAN activate it in each of the other 99 states?
Is there some compromise that can let me avoid spending all my time writing tests? :)
It seems you are asking whether your tests should be exhaustive: whether you should test for all possible states. The answer is a resounding no, for the simple reason that even simple code can have far too many states. Even small programs can have more potential states than can be tested even if you used all the time there has been since the big bang.
You should instead use equivalence partitioning: identify groups of states, such that all the states in a group are likely to have similar behaviour, then have one test case per group.
If you do that, you might discover you need only two test cases.
This is a scenario where you want to use one parameterized test that gets all 99 values as input.
Using xUnit.net, this could look like this (untested, might contain small compilation errors):
[Fact]
public void Widget_CannotActiveWidgetIfStateIsCancelled()
{
    // Arrange ...
    sut.State = State.Cancelled;
    Assert.False(sut.CanActivate);
}

[Theory, ValidStatesData]
public void Widget_CanActivateWidgetIfStateIsNotCancelled(State state)
{
    // Arrange ...
    sut.State = state;
    Assert.True(sut.CanActivate);
}

private class ValidStatesDataAttribute : DataAttribute
{
    public override IEnumerable<object[]> GetData(
        MethodInfo methodUnderTest, Type[] parameterTypes)
    {
        return Enum.GetValues(typeof(State))
            .Cast<State>()
            .Except(new[] { State.Cancelled })
            .Select(x => new object[] { x });
    }
}
If you're using NUnit you can use attributes (such as [Values] or [TestCaseSource]) so you only have to code one test but can still cover all 100 values.

DRYing Up EasyMock Tests

It seems like EasyMock tests tend to follow this pattern:
@Test
public void testCreateHamburger()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
        .andReturn(mockHamburger);
    // replay the mock
    EasyMock.replay(mockFoodFactory);
    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));
    // verify the result
    EasyMock.verify(mockFoodFactory);
}
This works fine for one test, but what happens when I want to test the same logic again in a different method? My first thought is to do something like this:
@Before
public void setUp()
{
    // set up the expectation
    EasyMock.expect(mockFoodFactory.createHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup"))
        .andReturn(mockHamburger);
    // replay the mock
    EasyMock.replay(mockFoodFactory);
}

@After
public void tearDown()
{
    // verify the result
    EasyMock.verify(mockFoodFactory);
}

@Test
public void testCreateHamburger()
{
    // perform the test
    mockAverager.average(chef.cookFood("Hamburger"));
}

@Test
public void testCreateMeal()
{
    // perform the test
    mockAverager.average(chef.cookMeal("Hamburger"));
}
There are a few fundamental problems with this approach. The first is that I can't have any variation in my method calls. If I want to test person.cookFood("Turkey Burger"), my setUp method won't work. The second problem is that my setUp method requires createHamburger to be called; if I call person.cookFood("Salad"), the expectation might not be applicable. I could use anyTimes() or stubReturn() with EasyMock to avoid this problem. However, these methods only verify that if the method is called, it is called with certain parameters – not that the method was actually called.
The only solution that's worked so far is to copy and paste the expectations for every test and vary the parameters. Does anybody know any better ways to test with EasyMock which maintain the DRY principle?
The problems you are running into arise because unit tests should be DAMP, not DRY. Unit tests will tend to repeat themselves. If you can remove the repetition in a safe way (so that it doesn't create unnecessarily coupled tests), then go for it. If not, don't force it. Unit tests should be quick and easy; if they aren't, you are spending too much time testing instead of writing business value.
Just my two cents. BTW, The Art of Unit Testing by Roy Osherove is a great read on unit testing, and covers this topic.
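If you do find repetition you can remove safely, one option is a plain private helper that builds the expectation, while each test keeps its own replay/verify. A hypothetical sketch reusing the question's mockFoodFactory, mockHamburger, mockAverager and chef fields (the helper name and its parameter names are made up):
// Hypothetical helper inside the same test class as the question's fields.
private void expectHamburger(String meat, String cheese, String veg1, String veg2, String sauce) {
    EasyMock.expect(mockFoodFactory.createHamburger(meat, cheese, veg1, veg2, sauce))
            .andReturn(mockHamburger);
}

@Test
public void testCreateHamburger() {
    expectHamburger("Beef", "Swiss", "Tomato", "Green Peppers", "Ketchup");
    EasyMock.replay(mockFoodFactory);

    mockAverager.average(chef.cookFood("Hamburger"));

    EasyMock.verify(mockFoodFactory);
}
Each test still states its own expectations explicitly, so tests with different parameters, or with no createHamburger call at all, simply don't use the helper.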

Running TestNG test sequentially with time-gap

I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I am able to specify the sequence of method calls using groups anyway, so that's not really the issue.
What about a hard dependency between the 2 tests? If you write that:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to set the parallel attribute of your suite to false. It is false by default, so tests should run sequentially out of the box unless you have changed something in the way you invoke your tests.
For adding a bit of delay between your classes, you can probably put your delay logic in a method annotated with @AfterClass. AFAIK TestNG does not have a way to specify that in a testng.xml or on the command line. There is a timeout attribute, but that is for timing out tests and probably not what you are looking for.
For adding a delay between your tests, i.e. between the <test> tags in the XML, you can try implementing the ITestListener.onFinish method and adding your delay code there; it runs after every <test>. If a delay is required after every test case, implement the delay in IInvokedMethodListener.afterInvocation(), which runs after every test method. You would then need to register the listener when you invoke your suite.
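To illustrate the listener approach, here is a minimal sketch of an IInvokedMethodListener that sleeps after every test method; the 500 ms gap and the class name are arbitrary choices:
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

// Sleeps briefly after every test method; register it via the <listeners>
// element in testng.xml or the @Listeners annotation.
public class DelayListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // no delay needed before a method runs
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        if (method.isTestMethod()) {        // skip configuration methods like @BeforeClass
            try {
                Thread.sleep(500);          // arbitrary gap; adjust as required
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}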
Hope it helps..
Following is what I used in some tests.
First, define utility methods like this:
// Make the thread sleep for a while, to reduce the effect on subsequent
// operations when any resources are shared.
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then call the method inside the test methods, at the end or at the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}

How do I ignore a test based on another test in NUnit?

I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks deceiving when both Add() and Get() fail because it looks like there's two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
In the same line, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations first before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    db.Get(someData);
    Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
    db.Add(someData);
}

[Test]
public void GetTest()
{
    // need some way here to ensure that db.Add() can actually be performed successfully
    db.Add(someData);
    db.Get(somedata);
    Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I can test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an "Assume.That" syntax for validating setup. This is documented as part of the Theory (thanks clairestreb). In the NUnit.Framework namespace is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.Size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that rather than connect to a real database during the test, your code should use some sort of a data interface, and for the test, you should be using a mock interface. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job - that's not really a Unit test.
I don't think that's possible out of the box.
Anyway, your test class design as you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test-class e.g. AddTests, and put the Get test(s) into another test-class, e.g. class GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
Create a flag, set it when the Add test fails (in a catch block around the test code), and return early from the Get test if it is set:
public boolean addFailed = false;

public void testAdd() {
    try {
        ... old test code ...
    } catch (Throwable t) { // catch all errors
        addFailed = true;
        throw t; // don't forget to rethrow
    }
}

public void testGet() {
    if (addFailed) return;
    ... old test code ...
}