With respect to NUnit:
Is there a mechanism to conditionally ignore a specific test case?
Something along the lines of:
[TestCase(1,2)]
[TestCase(3,4, Ignore=true, IgnoreReason="Doesn't meet conditionA", Condition=IsConditionA())]
public void TestA(int a, int b)
So, is there any such mechanism, or is the only way to create a separate test for each case and call Assert.Ignore in the test body?
You could add the following to the body of the test:
if (a == 3 && b == 4 && !IsConditionA()) { Assert.Ignore(); }
You would have to do this for every test case you want to ignore.
You would not replicate the test body, but you would add a check to it for every ignored test case.
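A fuller sketch of what that could look like, assuming IsConditionA() is a hypothetical helper for whatever the external condition is:

[TestCase(1, 2)]
[TestCase(3, 4)]
public void TestA(int a, int b)
{
    // Skip only this particular case when the condition is not met.
    if (a == 3 && b == 4 && !IsConditionA())
    {
        Assert.Ignore("Doesn't meet conditionA");
    }

    // ... the normal test body runs for every other case ...
}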
I think it helps test readability to minimize the conditional logic inside the test body. But you can definitely generate the test cases dynamically: put the TestCaseSource attribute on the test and, in a separate method, build the list of test cases to run using the NUnit TestCaseData object.
That way only the cases that are valid to execute are run, but you still have a chance to log the skipped ones.
http://www.nunit.org/index.php?p=testCaseSource&r=2.6.4
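A minimal sketch of that approach, assuming a hypothetical IsConditionA() helper for the condition check:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ConditionalCaseTests
{
    private static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData(1, 2);

        // When the condition is not met, mark the case ignored so it still
        // shows up in the results with a reason.
        if (IsConditionA())
            yield return new TestCaseData(3, 4);
        else
            yield return new TestCaseData(3, 4).Ignore("Doesn't meet conditionA");
    }

    [TestCaseSource("Cases")]
    public void TestA(int a, int b)
    {
        Assert.That(a, Is.LessThan(b));
    }

    private static bool IsConditionA()
    {
        return true; // hypothetical condition check
    }
}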
I have a class to unit test that has 2 dispatchers injected, one that is Dispatchers.Main and one that is Dispatchers.IO. My code switches between these two via withContext() at some point and then switches back via .collect(). I would like my unit test for this code to make an assertion on what dispatcher I am supposed to be on. My test code currently just uses two instances of TestCoroutineDispatcher.
Does anyone know how I can do this?
One workable thing I have tried is modifying my production code to use launch(mainDispatcher + CoroutineName("someUniqueName")) { ... }, so that my unit test can assert that Thread.currentThread().name contains that substring, but it feels far from ideal and rather hacky to modify my production code just for that.
Ideally, a test class is written for every class in the production code. Within a test class, not all test methods require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?
I suggest creating separate methods wrapping the necessary precondition setup. Do not confuse this approach with the traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: the receipt exists or doesn't exist, the receipt is invalid date-wise, the receipt is not committed. Our happy path is the default setup (for example, done via the traditional test setup). The happy-path test would then be as simple as (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special condition cases we simply write tiny, dedicated methods that would arrange our test environment (eg. setup dependencies) so that conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
ReceiptDoesNotExistInRepository("701");
receiptProvider.GetReceipt("701").IsNull();
}
[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
ReceiptHasInvalidDate("701");
receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes/no setup (a sketch of one such helper appears after the example below):
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
ReceiptHasInvalidDate("701");
ReceiptIsUncommitted("701");
receiptProvider.GetReceipt("701").Throws<Exception>();
}
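A hedged sketch of what one of those helper methods might look like, assuming the repository dependency is a Moq mock created in the common setup; the Receipt type and repository method name are hypothetical:

private void ReceiptHasInvalidDate(string receiptId)
{
    // Arrange the mocked repository so the receipt exists but carries a date
    // the validator will reject (names here are illustrative, not real APIs).
    repositoryMock
        .Setup(r => r.FindById(receiptId))
        .Returns(new Receipt { Id = receiptId, Date = DateTime.MinValue });
}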
It's an option to group tests with the same preconditions in the same classes; this also helps you avoid test classes of over a thousand lines. You can also group the creation of the preconditions in separate methods and let each test call the applicable method. Do this when most of the methods have different preconditions; otherwise you can just use a setup method that is called before each test.
I like to use a Setup method that gets called before each test runs. In this method I instantiate the class I want to test, giving it any dependencies it needs. Then I set the specific details for the individual test inside the test method itself. This moves the common initialization of the class out to the setup method and keeps each test focused on what it actually needs to evaluate.
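A minimal sketch of that pattern with NUnit and Moq; ReceiptProvider and IReceiptRepository are hypothetical placeholders for your own class under test and its dependency:

[TestFixture]
public class ReceiptProviderTests
{
    private Mock<IReceiptRepository> repositoryMock;
    private ReceiptProvider receiptProvider;

    [SetUp]
    public void SetUp()
    {
        // Common construction of the class under test and its dependencies.
        repositoryMock = new Mock<IReceiptRepository>();
        receiptProvider = new ReceiptProvider(repositoryMock.Object);
    }

    [Test]
    public void GetReceipt_ReturnsNull_WhenRepositoryIsEmpty()
    {
        // Test-specific arrangement stays inside the test itself.
        Assert.That(receiptProvider.GetReceipt("701"), Is.Null);
    }
}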
You may find this link valuable, it discusses an approach to Test Setups:
In Defense of Test Setup Methods, by Erik Dietrich
In my daily unit test coding with Xcode, I only use XCTestCase. There are also these other classes that don't seem to get used much such as: XCTestSuite, XCTest, XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when should I use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good, and I wonder why it has been ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is what XCTestSuite defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment: all methods with no parameters, returning no value, and prefixed with 'test', in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, and failureCount while the test is running, or things like hasSucceeded when it is done, and thereby obtain the result of running a test. XCTestRun gives you the ability to focus on what is happening, or has happened, in the test.
Back to XCTestCase: if you read the source code, you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector:. I recommend you dig into those as well.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use cmd + shift + O to open the Open Quickly dialog; just type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/
I have this piece of code:
function aFunctionForUnitTesting(aParameter) {
    return (aFirstCheck(aParameter) &&
            aOtherOne(aParameter) &&
            aLastOne(aParameter));
}
How can I unit test this?
My problem is the following; let's say I create this unit test:
FailWhenParameterDoesntFillFirstCheck()
{
Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameter));
}
How do I know that this test passes because of the first check? The function might have returned false because of the second or the third check, so aFirstCheck itself might be buggy.
You need to pass a different value for your parameter, one that will pass the first check and fail on the second:
FailWhenParameterDoesntFillFirstCheck()
{
Assert.IsFalse(new myObject().aFunctionForUnitTesting(aBadParameterThatPassesFirstCheckButFailsTheSecond));
}
You don't have to write all these cases; it depends on what level of code coverage you want. You can get 'modified condition/decision coverage' with four tests:
- all checks passing
- one test for each of the three checks failing
'Multiple condition coverage' would require 8 tests here. Not sure I'd bother unless I was working for Airbus :-)
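As an illustration, assuming a hypothetical C# version of the function, an instance sut of the class under test, and parameter values constructed so that exactly one check fails in each of the last three tests, the four tests might look like:

[Test]
public void ReturnsTrue_WhenAllChecksPass()
{
    Assert.IsTrue(sut.aFunctionForUnitTesting(passesAllChecks));
}

[Test]
public void ReturnsFalse_WhenOnlyFirstCheckFails()
{
    Assert.IsFalse(sut.aFunctionForUnitTesting(failsOnlyFirstCheck));
}

[Test]
public void ReturnsFalse_WhenOnlySecondCheckFails()
{
    Assert.IsFalse(sut.aFunctionForUnitTesting(failsOnlySecondCheck));
}

[Test]
public void ReturnsFalse_WhenOnlyThirdCheckFails()
{
    // Because of short-circuiting, this input exercises all three checks.
    Assert.IsFalse(sut.aFunctionForUnitTesting(failsOnlyThirdCheck));
}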
I'm writing some NUnit tests for database operations. Obviously, if Add() fails, then Get() will fail as well. However, it looks misleading when both Add() and Get() fail, because it looks like there are two problems instead of just one.
Is there a way to specify an 'order' for tests to run in, such that if the first test fails, the following tests are ignored?
Along the same lines, is there a way to order the unit test classes themselves? For example, I would like to run my tests for basic database operations before the tests for round-tripping data from the UI.
Note: This is a little different than having tests depend on each other, it's more like ensuring that something works first before running a bunch of tests. It's a waste of time to, for example, run a bunch of database operations if you can't get a connection to the database in the first place.
Edit: It seems that some people are missing the point. I'm not doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
db.Get(someData);
Assert.That(data was retrieved successfully);
}
Rather, I'm doing this:
[Test]
public void AddTest()
{
db.Add(someData);
}
[Test]
public void GetTest()
{
// need some way here to ensure that db.Add() can actually be performed successfully
db.Add(someData);
db.Get(somedata);
Assert.That(data was retrieved successfully);
}
In other words, I want to ensure that the data can be added in the first place before I test whether it can be retrieved. People are assuming I'm using data from the first test to pass the second test, when this is not the case. I'm trying to ensure that one operation is possible before attempting another that depends on it.
As I said already, you need to ensure you can get a connection to the database before running database operations. Or that you can open a file before performing file operations. Or connect to a server before testing API calls. Or...you get the point.
NUnit supports an Assume.That syntax for validating setup. This is documented as part of the Theory feature (thanks clairestreb). In the NUnit.Framework namespace there is a class Assume. To quote the documentation:
/// Provides static methods to express the assumptions
/// that must be met for a test to give a meaningful
/// result. If an assumption is not met, the test
/// should produce an inconclusive result.
So in context:
[Test]
public void TestGet() {
    MyList sut = new MyList();
    Object expecting = new Object();
    sut.Put(expecting);
    Assume.That(sut.size(), Is.EqualTo(1));
    Assert.That(sut.Get(), Is.EqualTo(expecting));
}
Tests should never depend on each other. You just found out why. Tests that depend on each other are fragile by definition. If you need the data in the DB for the test for Get(), put it there in the setup step.
I think the problem is that you're using NUnit to run something other than the sort of Unit Tests that NUnit was made to run.
Essentially, you want AddTest to run before GetTest, and you want NUnit to stop executing tests if AddTest fails.
The problem is that that's antithetical to unit testing - tests are supposed to be completely independent and run in any order.
The standard concept of Unit Testing is that if you have a test around the 'Add' functionality, then you can use the 'Add' functionality in the 'Get' test and not worry about if 'Add' works within the 'Get' test. You know 'Add' works - you have a test for it.
The 'FIRST' principle (http://agileinaflash.blogspot.com/2009/02/first.html) describes how Unit tests should behave. The test you want to write violates both 'I' (Isolated) and 'R' (Repeatable).
If you're concerned about the database connection dropping between your two tests, I would recommend that, rather than connecting to a real database during the test, your code use some sort of data interface, and that the test use a mock implementation. If the point of the test is to exercise the database connection, then you may simply be using the wrong tool for the job; that's not really a unit test.
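A hedged sketch of that idea: the data access goes behind a hypothetical IDataStore interface, and the test exercises the class under test against a Moq mock instead of a real database connection (all names here are illustrative):

public interface IDataStore
{
    void Add(Data item);
    Data Get(Data item);
}

public class DataService
{
    private readonly IDataStore store;
    public DataService(IDataStore store) { this.store = store; }
    public Data Fetch(Data item) { return store.Get(item); }
}

[Test]
public void Fetch_ReturnsDataFromStore()
{
    // The mock stands in for the database, so no connection is needed.
    var store = new Mock<IDataStore>();
    var someData = new Data();
    store.Setup(s => s.Get(someData)).Returns(someData);

    var service = new DataService(store.Object);

    Assert.That(service.Fetch(someData), Is.SameAs(someData));
}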
I don't think that's possible out of the box.
Anyway, the test class design you described will make the test code very fragile.
MbUnit seems to have a DependsOnAttribute that would allow you to do what you want.
If the other test fixture or test method fails, then this test will not run. Moreover, the dependency forces this test to run after those it depends upon.
Don't know anything about NUnit though.
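From memory, the usage is roughly the following; the exact attribute signature is an assumption on my part, so check the MbUnit documentation:

[Test]
public void AddTest()
{
    // ...
}

[Test, DependsOn("AddTest")]
public void GetTest()
{
    // Runs after AddTest and is skipped if AddTest failed.
}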
You can't assume any order of test fixture execution, so any prerequisites have to be checked for within your test classes.
Segregate your Add test into one test class, e.g. AddTests, and put the Get test(s) into another test class, e.g. GetTests.
In the [TestFixtureSetUp] method of the GetTests class, check that you have working database access (e.g. that Adds work), and if not, call Assert.Ignore or Assert.Inconclusive, as you deem appropriate.
This will abort the GetTests test fixture when its prerequisites aren't met, and skip trying to run any of the unit tests it contains.
(I think! I'm an nUnit newbie.)
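A minimal sketch of that suggestion, assuming a hypothetical CanConnectToDatabase() helper (in newer NUnit versions the attribute is [OneTimeSetUp] rather than [TestFixtureSetUp]):

[TestFixture]
public class GetTests
{
    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Skip the whole fixture when its prerequisite isn't met.
        if (!CanConnectToDatabase())
            Assert.Ignore("Skipping GetTests: no database connection available.");
    }

    [Test]
    public void GetTest()
    {
        // ... add the data, then get it back ...
    }
}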
Create a shared flag, set it to true when the Add test fails, and return early from the Get test if it is set:
public static boolean addFailed = false; // static, so the flag survives across test instances
public void testAdd () {
try {
... old test code ...
} catch (Throwable t) { // Catch all errors
addFailed = true;
throw t; // Don't forget to rethrow
}
}
public void testGet () {
if (addFailed) return;
... old test code ...
}