xUnit: programmatically add new tests / "[Fact]"s?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
PileOfTests pot = new PileOfTests();
pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if each returns HTTP 200 ("OK"). If any fail, we currently print it as a failure using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests passed (understandably, since nothing ever asserts or fails). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.

The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way the tests to run are passed to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
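For illustration, here is a minimal sketch of the Reflection.Emit part of that idea (the xUnit wiring - the [RunWith]/[EnumerateFilesFixture] attributes and the custom runner from the gist - is omitted). It assumes the question's PileOfTests class has a public parameterless constructor and a RunTest(string fileName) method, and it targets .NET Framework, where AppDomain.DefineDynamicAssembly is available:
using System;
using System.IO;
using System.Reflection;
using System.Reflection.Emit;

static class DynamicTestClassBuilder
{
    // Emits a class with one parameterless method per *.json file; each method
    // simply forwards to PileOfTests.RunTest(fileName).
    public static Type Build(string jsonFolder)
    {
        var asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("DynamicJsonTests"), AssemblyBuilderAccess.Run);
        var moduleBuilder = asmBuilder.DefineDynamicModule("MainModule");
        var typeBuilder = moduleBuilder.DefineType(
            "GeneratedJsonTests", TypeAttributes.Public | TypeAttributes.Class);

        MethodInfo runTest = typeof(PileOfTests).GetMethod("RunTest", new[] { typeof(string) });
        ConstructorInfo ctor = typeof(PileOfTests).GetConstructor(Type.EmptyTypes);

        foreach (string file in Directory.GetFiles(jsonFolder, "*.json"))
        {
            // One test method per file, named after the file.
            MethodBuilder method = typeBuilder.DefineMethod(
                Path.GetFileNameWithoutExtension(file),
                MethodAttributes.Public, typeof(void), Type.EmptyTypes);

            // Method body: new PileOfTests().RunTest("<full path of this file>");
            ILGenerator il = method.GetILGenerator();
            il.Emit(OpCodes.Newobj, ctor);
            il.Emit(OpCodes.Ldstr, file);
            il.Emit(OpCodes.Callvirt, runTest);
            il.Emit(OpCodes.Ret);
        }

        return typeBuilder.CreateType();
    }
}
The generated type then has to be handed to xUnit by the custom runner described above so that each method shows up - and can be run - as its own test.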
Hope this helps; feel free to use the example implementation if you like.

Related

Why doesn't the log4net XmlConfigurator attribute work for my unit tests

I'm using log4net, trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so that is not good.
Since that works, that seems to eliminate all of the "bad config, config location" issues.
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the assemblyinfo of my test project, but when I run (either via native MSTest, or Resharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
// Take care that the log4net.config file is added to the deployment files of the testconfig
FileInfo fileInfo;
string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
fileInfo = new FileInfo(fullPath);
log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes:
Therefore if you use configuration attributes you must invoke log4net
to allow it to read the attributes. A simple call to
LogManager.GetLogger will cause the attributes on the calling assembly
to be read and processed. Therefore it is imperative to make a logging
call as early as possible during the application start-up, and
certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects; you will have to use the XmlConfigurator instead.
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitializeAttribute, as in the snippet above.

iOS Unit Testing: XCTestSuite, XCTest, XCTestRun, XCTestCase methods

In my daily unit test coding with Xcode, I only use XCTestCase. But there are also other classes that don't seem to get used much, such as XCTestSuite, XCTest, and XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest-classes and methods.
Well, this question is pretty good, and I was just wondering why it was being ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for
most developers creating tests for their projects. A test case subclass can have
multiple test methods and supports setup and tear down that executes for every test
method as well as class level setup and tear down.
On the other hand, this is what XCTestSuite defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite you can construct your own test suite for a specific subset of test cases, instead of the default suite ([XCTestCase defaultTestSuite]), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment - all methods with no parameters, returning no value, and prefixed with ‘test’ in all subclasses of XCTestCase.
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit
test assertions are classified as "expected", while failures from unrelated or
uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration and failureCount while the test is running, or things like hasSucceeded when it is done, and thereby get the result of running a test. XCTestRun gives you the ability to track what is happening, or what has happened, during a test.
Back to XCTestCase: you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector: if you read the source code, and I recommend doing so for more digging.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included
in the test target. It iterates over each class in that list, and creates
a new instance of that class for each test method. It then creates an
"invocation" to execute that test method. The invocation is an instance of
NSInvocation, which represents a single message send in Objective-C.
The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd + Shift + O to bring up the Open Quickly dialog, then type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/

Two asserts in the same unit test method - how to do it?

I'm starting out using unit tests. I have a situation and don't know how to proceed:
For example:
I have a class that opens and reads a file.
In my unit test, I want to test the open method and the read method, but to read the file I need to open the file first.
If the "open file" test fails, the "read file" test would fail too!
So how do I make it explicit that the read failed because of the open? Do I test the open inside the read test?
The key feature of unit tests is isolation: one specific unit test should cover one specific functionality - and if it fails, it should report it.
In your example, read clearly depends on the open functionality: if the latter is broken, there's no reason to test the former, as we already know the result. Moreover, reporting a read failure would only add irrelevant noise to your test results.
What can (and should) be reported for read in this case is a skipped test or something similar. That's how it's done in PHPUnit, for example:
class DependencyFailureTest extends PHPUnit_Framework_TestCase
{
public function testOne()
{
$this->assertTrue(FALSE);
}
/**
* @depends testOne
*/
public function testTwo()
{
}
}
Here we have testTwo dependent on testOne. And that's what's shown when the test is run:
There was 1 failure:
1) testOne(DependencyFailureTest)
Failed asserting that <boolean:false> is true.
/home/sb/DependencyFailureTest.php:6
There was 1 skipped test:
1) testTwo(DependencyFailureTest)
This test depends on "DependencyFailureTest::testOne" to pass.
FAILURES!
Tests: 2, Assertions: 1, Failures: 1, Skipped: 1.
Explanation:
To quickly localize defects, we want our attention to be focused on
relevant failing tests. This is why PHPUnit skips the execution of a
test when a depended-upon test has failed.
Opening the file is a prerequisite to reading the file, so it's fine to include that in the test. You can throw an exception in your code if the file failed to open. The error message in the test will then make it clear why the test failed.
I would also recommend that you consider creating the file in the test itself to remove any dependencies on existing files. That way you ensure that you always have a valid file to reference.
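As a rough sketch of both suggestions (FileParser and its Open/Read members are hypothetical stand-ins for the class in the question), the test below creates its own file, asserts on the read result, and cleans up afterwards; Open throws a descriptive exception, so a failing read immediately points at the real cause:
using System.IO;
using Xunit;

// Minimal stand-in for the class under test described in the question.
public class FileParser
{
    private string _path;

    public void Open(string path)
    {
        // Fail loudly (and descriptively) if the file cannot be opened.
        if (!File.Exists(path))
            throw new FileNotFoundException("Could not open input file", path);
        _path = path;
    }

    public string Read()
    {
        return File.ReadAllText(_path);
    }
}

public class FileParserTests
{
    [Fact]
    public void Read_ReturnsContents_OfFileCreatedByTheTest()
    {
        // The test writes its own input file, so it never depends on a pre-existing one.
        string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        File.WriteAllText(path, "expected contents");
        try
        {
            var parser = new FileParser();
            parser.Open(path);
            Assert.Equal("expected contents", parser.Read());
        }
        finally
        {
            File.Delete(path); // leave no stray files behind
        }
    }
}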
Generally speaking, you wouldn't find yourself testing your proposed scenario of unit testing the ability to read from a file, since you will usually end up using a file manipulation library of some kind and can usually safely assume that the maintainers of said library have the appropriate unit tests already in place (for example, I feel pretty confident that I can use the File class in .NET without worry).
That being said, the idea of one condition being an impediment to testing a second is certainly valid. That's why mock frameworks were created: so that you can easily set up a mock object that always behaves in a defined manner and can be substituted for the initial dependency. This allows you to focus squarely on unit testing the second object/condition/etc. in a test scenario.
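To make that concrete, here is a small sketch using Moq as the mock framework (the choice of Moq and the IFileSource/FileReader names are assumptions, not something from the question): the "open" dependency is replaced by a mock that always succeeds, so the test exercises only the reading logic.
using System.IO;
using System.Text;
using Moq;
using Xunit;

// The dependency that "opens" a file; the real implementation would hit the file system.
public interface IFileSource
{
    Stream Open(string path);
}

// The unit under test: only its reading logic is exercised here.
public class FileReader
{
    private readonly IFileSource _source;

    public FileReader(IFileSource source)
    {
        _source = source;
    }

    public string ReadAllText(string path)
    {
        using (var reader = new StreamReader(_source.Open(path)))
        {
            return reader.ReadToEnd();
        }
    }
}

public class FileReaderTests
{
    [Fact]
    public void ReadAllText_ReturnsContents_WhenOpenSucceeds()
    {
        // The mock always opens successfully, so a broken "open" cannot fail this test.
        var source = new Mock<IFileSource>();
        source.Setup(s => s.Open("any.txt"))
              .Returns(new MemoryStream(Encoding.UTF8.GetBytes("hello")));

        string result = new FileReader(source.Object).ReadAllText("any.txt");

        Assert.Equal("hello", result);
    }
}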

AngularJS mocking $logProvider in config block

Is there a way to inject providers when writing unit tests using Karma (Testacular) and Jasmine in Angular?
Our team decided recently to use angularjs $log to write debugging details to the console. This way we can leverage the ability to disable the logging via the $logProvider.debugEnabled() method.
angular.module("App", ["prismLogin", "ui.bootstrap"])
.config(["$routeProvider", "$logProvider",
function ($routeProvider, $logProvider) {
$routeProvider; // routes here, edited for brevity
// This is the offending line; it breaks several pre-existing tests:
$logProvider.debugEnabled(true);
}]);
However after adding the $logProvider.debugEnabled(true); line several of our tests no longer execute successfully, failing with the following message:
TypeError: Object doesn't support property or method 'debugEnabled' from App
So my question again, is it possible to mock the $logProvider? Or should I provide my own configuration block for the test harness?
I attempted searching for a way to mock the app module with no luck. It seems to me that using the concrete app module instead of a mock is very brittle. I would like to avoid reworking tests associated with the app module every time a change is made in the app or run configuration blocks.
The tests that are failing are units of code with no relation to the $logProvider. I feel as if I am missing something here and making things much harder than they should be. How should one go about writing tests that are flexible and not affected by other side effects introduced in your application?
It appears that this is a known issue with angular-mocks.
Until the issue is addressed, I was able to work around it by adding the following method to the angular.mock.$LogProvider definition in angular-mocks.js at line 295.
this.debugEnabled = function(flag) {
return this;
};

Unit testing code with a file system dependency

I am writing a component that, given a ZIP file, needs to:
Unzip the file.
Find a specific dll among the unzipped files.
Load that dll through reflection and invoke a method on it.
I'd like to unit test this component.
I'm tempted to write code that deals directly with the file system:
void DoIt()
{
Zip.Unzip(theZipFile, "C:\\foo\\Unzipped");
System.IO.File myDll = File.Open("C:\\foo\\Unzipped\\SuperSecret.bar");
myDll.InvokeSomeSpecialMethod();
}
But folks often say, "Don't write unit tests that rely on the file system, database, network, etc."
If I were to write this in a unit-test friendly way, I suppose it would look like this:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
string path = zipper.Unzip(theZipFile);
IFakeFile file = fileSystem.Open(path);
runner.Run(file);
}
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
It doesn't feel like I'm testing functionality anymore. It feels like I'm just testing class interactions.
My question is this: what's the proper way to unit test something that is dependent on the file system?
Edit: I'm using .NET, but the concept could apply to Java or native code too.
Yay! Now it's testable; I can feed in test doubles (mocks) to the DoIt method. But at what cost? I've now had to define 3 new interfaces just to make this testable. And what, exactly, am I testing? I'm testing that my DoIt function properly interacts with its dependencies. It doesn't test that the zip file was unzipped properly, etc.
You have hit the nail right on the head. What you want to test is the logic of your method, not necessarily whether a real file can be accessed. You don't need to test (in this unit test) whether a file is correctly unzipped; your method takes that for granted. The interfaces are valuable in themselves because they provide abstractions that you can program against, rather than implicitly or explicitly relying on one concrete implementation.
Your question exposes one of the hardest parts of testing for developers just getting into it:
"What the hell do I test?"
Your example isn't very interesting because it just glues some API calls together, so if you were to write a unit test for it you would end up just asserting that methods were called. Tests like this tightly couple your implementation details to the test. This is bad because now you have to change the test every time you change the implementation details of your method, because changing the implementation details breaks your test(s)!
Having bad tests is actually worse than having no tests at all.
In your example:
void DoIt(IZipper zipper, IFileSystem fileSystem, IDllRunner runner)
{
string path = zipper.Unzip(theZipFile);
IFakeFile file = fileSystem.Open(path);
runner.Run(file);
}
While you can pass in mocks, there's no logic in the method to test. If you were to attempt a unit test for this it might look something like this:
// Assuming that zipper, fileSystem, and runner are mocks
void testDoIt()
{
// mock behavior of the mock objects
IFakeFile file = mock(IFakeFile.class);
when(zipper.Unzip(any(File.class))).thenReturn("some path");
when(fileSystem.Open("some path")).thenReturn(file);
// run the test
someObject.DoIt(zipper, fileSystem, runner);
// verify things were called
verify(zipper).Unzip(any(File.class));
verify(fileSystem).Open("some path");
verify(runner).Run(file);
}
Congratulations, you basically copy-pasted the implementation details of your DoIt() method into a test. Happy maintaining.
When you write tests you want to test the WHAT and not the HOW.
See Black Box Testing for more.
The WHAT is the name of your method (or at least it should be). The HOW are all the little implementation details that live inside your method. Good tests allow you to swap out the HOW without breaking the WHAT.
Think about it this way, ask yourself:
"If I change the implementation details of this method (without altering the public contract) will it break my test(s)?"
If the answer is yes, you are testing the HOW and not the WHAT.
To answer your specific question about testing code with file system dependencies, let's say you had something a bit more interesting going on with a file and you wanted to save the Base64 encoded contents of a byte[] to a file. You can use streams for this to test that your code does the right thing without having to check how it does it. One example might be something like this (in Java):
interface StreamFactory {
OutputStream outStream();
InputStream inStream();
}
class Base64FileWriter {
public void write(byte[] contents, StreamFactory streamFactory) throws IOException {
OutputStream outputStream = streamFactory.outStream();
outputStream.write(Base64.encodeBase64(contents));
}
}
@Test
public void save_shouldBase64EncodeContents() throws IOException {
OutputStream outputStream = new ByteArrayOutputStream();
StreamFactory streamFactory = mock(StreamFactory.class);
when(streamFactory.outStream()).thenReturn(outputStream);
// Run the method under test
Base64FileWriter fileWriter = new Base64FileWriter();
fileWriter.write("Man".getBytes(), streamFactory);
// Assert we saved the base64 encoded contents
assertThat(outputStream.toString()).isEqualTo("TWFu");
}
The test uses a ByteArrayOutputStream, but in the application (using dependency injection) the real StreamFactory (perhaps called FileStreamFactory) would return a FileOutputStream from outStream() and would write to a File.
What was interesting about the write method here is that it was writing the contents out Base64 encoded, so that's what we tested for. For your DoIt() method, this would be more appropriately tested with an integration test.
There's really nothing wrong with this; it's just a question of whether you call it a unit test or an integration test. You just have to make sure that if you do interact with the file system, there are no unintended side effects. Specifically, make sure that you clean up after yourself -- delete any temporary files you created -- and that you don't accidentally overwrite an existing file that happened to have the same filename as a temporary file you were using. Always use relative paths and not absolute paths.
It would also be a good idea to chdir() into a temporary directory before running your test, and chdir() back afterwards.
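A small sketch of that hygiene in C# (assuming xUnit, where the test class constructor and Dispose run before and after every test; the TempWorkingDirectory name is purely illustrative):
using System;
using System.IO;

public sealed class TempWorkingDirectory : IDisposable
{
    private readonly string _originalDirectory;
    private readonly string _tempDirectory;

    public TempWorkingDirectory()
    {
        // Switch into a fresh scratch folder so the test cannot clobber real files.
        _originalDirectory = Directory.GetCurrentDirectory();
        _tempDirectory = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        Directory.CreateDirectory(_tempDirectory);
        Directory.SetCurrentDirectory(_tempDirectory);
    }

    public void Dispose()
    {
        // Restore the original working directory and delete everything the test created.
        Directory.SetCurrentDirectory(_originalDirectory);
        Directory.Delete(_tempDirectory, recursive: true);
    }
}
A test class can create one of these in its constructor and dispose it in its own Dispose, so every test starts in an empty directory and cleans up after itself.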
I am reticent to pollute my code with types and concepts that exist only to facilitate unit testing. Sure, if it makes the design cleaner and better then great, but I think that is often not the case.
My take on this is that your unit tests would do as much as they can which may not be 100% coverage. In fact, it may only be 10%. The point is, your unit tests should be fast and have no external dependencies. They might test cases like "this method throws an ArgumentNullException when you pass in null for this parameter".
I would then add integration tests (also automated and probably using the same unit testing framework) that can have external dependencies and test end-to-end scenarios such as these.
When measuring code coverage, I measure both unit and integration tests.
There's nothing wrong with hitting the file system, just consider it an integration test rather than a unit test. I'd swap the hard coded path with a relative path and create a TestData subfolder to contain the zips for the unit tests.
If your integration tests take too long to run then separate them out so they aren't running as often as your quick unit tests.
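One way to make that split in xUnit 2.x is the [Trait] attribute, sketched below (the "Category"/"Integration" names are just a convention); most runners can then include or exclude tests by trait, so the slow, file-system-touching tests only run when you ask for them.
using Xunit;

public class ZipDeploymentTests
{
    [Fact]
    [Trait("Category", "Integration")]
    public void Unzips_RealFile_FromTestDataFolder()
    {
        // touches the real file system, so it is tagged and run less often
    }

    [Fact]
    public void Throws_WhenZipPathIsNull()
    {
        // fast, in-memory unit test with no trait; runs on every build
    }
}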
I agree; sometimes I think interaction-based testing can cause too much coupling, and it often ends up not providing enough value. You really want to test unzipping the file here, not just verify that you are calling the right methods.
One way would be to write the unzip method to take InputStreams. Then the unit test could construct such an InputStream from a byte array using ByteArrayInputStream. The contents of that byte array could be a constant in the unit test code.
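A C# analogue of that idea, as a sketch only (the Unzipper class is hypothetical, and System.IO.Compression.ZipArchive stands in for whatever zip library is actually used): the test builds the zip entirely in memory, so no files on disk are involved.
using System.IO;
using System.IO.Compression;
using System.Linq;
using Xunit;

public static class Unzipper
{
    // Returns the entry names found in the zip read from the given stream.
    public static string[] ListEntries(Stream zipStream)
    {
        using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Read, leaveOpen: true))
        {
            return archive.Entries.Select(e => e.FullName).ToArray();
        }
    }
}

public class UnzipperTests
{
    [Fact]
    public void ListEntries_ReturnsNamesOfZippedFiles()
    {
        // Arrange: build a zip in memory instead of reading one from disk.
        var buffer = new MemoryStream();
        using (var archive = new ZipArchive(buffer, ZipArchiveMode.Create, leaveOpen: true))
        using (var writer = new StreamWriter(archive.CreateEntry("SuperSecret.bar").Open()))
        {
            writer.Write("not really a dll");
        }
        buffer.Position = 0;

        // Act + Assert
        Assert.Equal(new[] { "SuperSecret.bar" }, Unzipper.ListEntries(buffer));
    }
}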
This seems to be more of an integration test as you are depending on a specific detail (the file system) that could change, in theory.
I would abstract the code that deals with the OS into its own module (class, assembly, jar, whatever). In your case you want to load a specific DLL if found, so make an IDllLoader interface and a DllLoader class. Have your app acquire the DLL from the DllLoader using the interface, and test that -- you're not responsible for the unzip code after all, right?
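A rough sketch of that abstraction (the exact member shape is an assumption; the answer only names the interface and class):
using System.IO;
using System.Reflection;

// Abstraction over "find and load the dll" so the rest of the app can be
// tested against a fake implementation.
public interface IDllLoader
{
    Assembly Load(string unzippedFolder, string dllName);
}

public class DllLoader : IDllLoader
{
    public Assembly Load(string unzippedFolder, string dllName)
    {
        // The only place that actually touches the file system and reflection loading.
        return Assembly.LoadFrom(Path.Combine(unzippedFolder, dllName));
    }
}
In tests, a stub IDllLoader can return a known assembly (or throw), so the code that consumes the loaded DLL can be exercised without unzipping anything.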
Assuming that "file system interactions" are well tested in the framework itself, create your method to work with streams, and test this. Opening a FileStream and passing it to the method can be left out of your tests, as FileStream.Open is well tested by the framework creators.
You should not test class interaction and function calling. Instead, you should consider integration testing. Test the required result and not the file loading operation.
As others have said, the first is fine as an integration test. The second tests only what the function is supposed to actually do, which is all a unit test should do.
As shown, the second example looks a little pointless, but it does give you the opportunity to test how the function responds to errors in any of the steps. You don't have any error checking in the example, but in the real system you may have, and the dependency injection would let you test all the responses to any errors. Then the cost will have been worth it.
For unit tests, I would suggest that you include the test file in your project (EAR file or equivalent) and then use a relative path in the unit tests, i.e. "../testdata/testfile".
As long as your project is correctly exported/imported, your unit test should work.