Is there an auto test framework that tracks passing asserts? - unit-testing

I'm looking for a unit test framework that tracks every assert in the code, pass or fail. I looked into Google Test, which is based on xUnit, and it only tracks failures. I need this because I work in a company that makes medical devices and we must keep evidence of validation that can be audited by the FDA. We want a test report that tells you what the test did, not just that it passed. Also, the framework would have to be usable with POSIX C++.
Ideally what I would like to have is something like this (using Google Test syntax):
EXPECT_EQ(1, x, "checking x value");
and the test would generate a report that has the following for every assert: a description, the expected value, the actual value, the comparison type, and a pass/fail status.
It looks like I'll have to create my own test framework to accomplish this. I stepped into the code of Google Test and verified that it really does nothing for a passing assert. Before creating my own framework, I wanted to see whether there were other ideas, such as an existing framework that could accomplish this or be modified to accomplish it.
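For what it's worth, the usual answer here is a thin wrapper: log every check yourself, then delegate to the framework's own assert. Google Test's built-in --gtest_output=xml flag records per-test results but not per-assert detail, so the wrapper is where a per-assert report would come from. Below is a minimal sketch of the idea. It is written in C# against NUnit to match the other examples collected on this page, but the same shape translates to a C++ macro over EXPECT_EQ; the Check class, the CSV format, and the report path are all invented for illustration.

using System;
using System.IO;
using NUnit.Framework;

public static class Check
{
    // Hypothetical recording assert: appends one CSV line per check
    // (description, comparison, expected, actual, status), pass or
    // fail, then delegates to the framework's own assert.
    private const string ReportPath = "assert_report.csv";

    public static void Eq<T>(T expected, T actual, string description)
    {
        bool passed = Equals(expected, actual);
        File.AppendAllText(ReportPath, string.Format(
            "{0},EQ,{1},{2},{3}\n",
            description, expected, actual, passed ? "PASS" : "FAIL"));
        Assert.AreEqual(expected, actual, description);
    }
}

[TestFixture]
public class ReportDemo
{
    [Test]
    public void XHasExpectedValue()
    {
        int x = 1;
        Check.Eq(1, x, "checking x value"); // recorded even though it passes
    }
}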

Why not simply generate a json/xml/html report as part of your build process and then check that file into some kind of source control?

Related

store results of automatic tests and show results in a web UI

I'm looking for a piece (or a set) of software that allows me to store the outcome (ok/failed) of an automated test, along with additional information: the test protocol, so the exact reason for a failure can be seen, and the device state at the end of a test run as a compressed archive. The results should be accessible via a web UI.
I don't need fancy pie charts or colored graphs. A simple table is enough. However, the user should be able to filter for specific test runs and/or specific tests. The test runs should have a sane name (like the version of the software that was tested, not just some number).
Currently the build system includes unit tests based on cmake/ctest whose results should be included. Furthermore, integration testing will be done in the future, where the actual tests will run on embedded hardware controlled via network by a shell script or similar. The format of the test results is therefore flexible and could be something like subunit or TAP, if that helps.
I have played around with Jenkins, which is said to be great for automated tests, but the plugins I tried don't seem to interact well. To be specific: the Test Results Analyzer plugin doesn't show tests imported with the TAP plugin, and the names of the test runs are just a meaningless build number, although I used the Job Name Setter plugin to set a sensible job name. The filtering options are limited, too.
My somewhat uneducated guess is that I'll stumble over similar issues if I try other tools of the same class as Jenkins.
Is anyone aware of a solution for my described testing scenario? Lightweight/open source software is preferred.

Isolation during unit testing and duplicate code for test data

I am working on a Java application and writing JUnit tests, and I have a design question about unit testing. One class reads a file and creates an object called Song by reading different lines and parsing them based on some algorithm; I have written unit tests for that. The next step after parsing is to convert that song to a different format based on some properties of the Song object. Another class works as a translator, with a translate method that takes a Song object as input. Now, in the unit tests for the translator, I need a Song object with all valid properties. I am unsure whether I should create the Song object by duplicating the parser's functionality, or call the parser service to do it for me. The second option does not feel isolated, but the first amounts to duplicate code. Can somebody guide me on this?
There's nothing wrong with using a Builder to create the input data for a SUT invocation when that data is complex; however, I see two risks here.
First, if the builder fails, your test will fail too, though it shouldn't. As you said, unit tests should be isolated from external code.
Second, if you use code coverage as a metric to evaluate how good your unit tests are (I don't mean this is right), the builder's coverage will tempt you to think it is tested, though it obviously isn't.
My opinion is that there is no single solution that fits all scenarios. If the input data is not very complex, try to build it "manually"; otherwise, use the builder, as sketched below.
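To make that concrete, here is a minimal sketch of such a builder. The question is Java, but the shape is identical there; this is written in C# to match the other examples on this page, and Song, its properties, and the defaults are all hypothetical.

// Hypothetical domain object under test.
public class Song
{
    public string Title { get; private set; }
    public int Tempo { get; private set; }
    public Song(string title, int tempo) { Title = title; Tempo = tempo; }
}

// Test-data builder: sensible defaults, overridable per test, so the
// translator tests get a valid Song without invoking the real parser.
public class SongBuilder
{
    private string title = "Default Title";
    private int tempo = 120;

    public SongBuilder WithTitle(string value) { title = value; return this; }
    public SongBuilder WithTempo(int value) { tempo = value; return this; }
    public Song Build() { return new Song(title, tempo); }
}

// In a translator test:
//   Song song = new SongBuilder().WithTempo(90).Build();
//   var result = translator.Translate(song);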

How can I determine the name of my unit test before its execution?

I was using MSTest and all was fine. Not long ago, I needed to write a large number of data driven unit tests.
Moreover, I needed to know the name of each test just before it ran, so that I could populate the data sources with the correct parameters (fetched from an external remote service).
Nowhere in MSTest could I find a way to get the names of the tests that are about to run before their actual execution; by the time a test runs it is, of course, already too late, since the data sources have already been populated.
What I need is to know the names of the tests that are about to execute, so that I can configure their data sources in advance, before their execution.
Somebody suggested I "check out NUnit". I am completely clueless about NUnit. For now I have started reading its documentation but am still at a loss. Have you any advice?
If you really need the test's name: it's not well documented, but NUnit exposes a feature that lets you access the current test's information:
using System;
using NUnit.Framework;

namespace NUnitOutput.Example
{
    [TestFixture]
    public class Demo
    {
        [Test]
        public void WhatsMyName()
        {
            Console.WriteLine(TestContext.CurrentContext.Test.FullName);
            Console.WriteLine(TestContext.CurrentContext.Test.Name);
        }
    }
}
This prints:
NUnitOutput.Example.Demo.WhatsMyName
WhatsMyName
Note that this feature isn't guaranteed to be implemented by custom test runners, such as ReSharper's. I have tested it in NUnit 2.5.9 (nunit.exe and nunit-console.exe).
However, re-reading your question, I think what you should check out is the TestCaseSource or TestCase attribute, which can be used to parameterize your tests.
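For example, with the TestCase attribute the parameters live on the test itself, so every case (and its generated name) is known to the runner up front; the values below are purely illustrative:

using NUnit.Framework;

[TestFixture]
public class ParameterizedDemo
{
    // Each TestCase becomes a separately named test, e.g. Add(2,3,5),
    // so the data travels with the test's identity.
    [TestCase(2, 3, 5)]
    [TestCase(-1, 1, 0)]
    public void Add(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}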
If I'm understanding your problem correctly, you want to get the name of the currently-running test so that you can use it as a key to look up a set of data with which to populate the data sources used by the code under test. Is that right?
If that's the case then I don't think you need to look for special functionality in your unit testing framework. Why not use the Reflection API to fetch the name of the currently-executing method? System.Reflection.MethodBase.GetCurrentMethod() will get you a MethodBase object representing the method, and that has a Name property.
However, I'd suggest that using the method name as a key for looking up the appropriate test data is a bad idea. You'd be coupling the name of the method to your data set, and that seems like a recipe for fragile code to me. You need to remain free to refactor your code and rename methods without worrying about whether that will break the database lookup behind the scenes.
As an alternative, why not consider creating a custom Attribute that you can use to mark those test methods that need a database lookup, and using a property on that attribute to hold the key?
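A minimal sketch of that idea, with the attribute name and the key scheme invented for illustration:

using System;
using System.Reflection;
using NUnit.Framework;

// Hypothetical marker attribute carrying the lookup key for test data.
[AttributeUsage(AttributeTargets.Method)]
public sealed class DataKeyAttribute : Attribute
{
    public string Key { get; private set; }
    public DataKeyAttribute(string key) { Key = key; }
}

[TestFixture]
public class LookupDemo
{
    [Test]
    [DataKey("customer-with-overdue-invoice")]
    public void OverdueInvoiceIsFlagged()
    {
        // Read the key from the current method instead of using its name,
        // so renaming the test cannot break the data lookup.
        MethodBase method = MethodBase.GetCurrentMethod();
        var attr = (DataKeyAttribute)Attribute.GetCustomAttribute(
            method, typeof(DataKeyAttribute));
        // ... use attr.Key to populate the data source ...
        Assert.IsNotNull(attr.Key);
    }
}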
For things like this, you should rely on a fixture to initialize the state you want before you run the test.
The simplest approach, which works in any testing framework, is to create a fixture that loads data given a data identifier (a string). Each test case then just provides the identifier for the data it wants.
Aside from this, it's not recommended to have unit tests access files and other external resources: it means slower unit tests and a higher probability of failure, since you're relying on something outside the in-memory code. This depends on the amount of data you have and the type of testing you're doing, of course, but I generally have the data compiled in for unit tests.
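A minimal sketch of that pattern, with the loader and the data-set names invented; the store is compiled in here, but the loader could just as well read files keyed by the identifier:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class FixtureDemo
{
    // Hypothetical in-memory store of named data sets.
    private static readonly Dictionary<string, int[]> DataSets =
        new Dictionary<string, int[]>
        {
            { "empty", new int[0] },
            { "typical", new[] { 1, 2, 3 } },
        };

    private static int[] LoadData(string id) { return DataSets[id]; }

    [Test]
    public void SumOfTypicalData()
    {
        // Each test names the data set it wants.
        int[] data = LoadData("typical");
        Assert.AreEqual(6, data[0] + data[1] + data[2]);
    }
}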

Unit Testing show output

I searched for a unit testing tool and found that NUnit is suitable, and I think it is good. My problem is that this tool only shows the test method result (pass or fail), and I need to show not only pass or fail but also the output. How can I show the output using NUnit? If there is another unit testing tool that can do it, that is also good. If it is not supported, please suggest how I can solve it.
All ideas are welcome.
Piping the output of System.Console will work for NUnit, but it's not your best option.
For passing tests, you shouldn't need to review the console output to verify that the tests have passed. If you are, you're doing it wrong. Tests should be automated and repeatable without human intervention. Verifying by hand doesn't scale and creates false positives.
On the other hand, having console output for failing tests is helpful, but it will only provide information that could otherwise be inferred from attaching a debugger. That's a lot of extra effort to add console logging to your application for little benefit.
Instead, make sure that your failure messages are meaningful. When writing your tests, make your assertions explicit: always use the assertion that most closely fits what you are asserting, and provide a failure message that explains why the test is important.
For example:
// very bad
Assert.IsTrue( collection.Count == 23 );
The above assertion doesn't provide much help when the test fails. Because of how NUnit formats assertion output, it will only state something like "expecting <True> but was <False>".
A more appropriate assert will provide more meaningful test failures.
// much better
Assert.AreEqual(23, collection.Count,
    "There should be a minimum of 23 items by default.");
This provides a much more meaningful failure message: "Expecting <23> but was <0>: There should be a minimum of 23 items by default."
In the NUnit GUI, you can click Text Output on the bottom bar, which shows all debug and console output.
It depends on where you want to output the data from a test.
I believe you are after something other than the usual file, log, console, or debug output.
As an alternative, NUnit lets you emit a message into the regular test output stream; just use the following utility methods:
For a successful test:
Assert.Pass( string message, object[] parms );
For a failed test:
Assert.Fail( string message, object[] parms );
See the NUnit documentation for more details.
This post comes long after the question was asked, but I wanted to chime in. Yes, you can accomplish a lot in unit/integration tests and probably do most of what you need, so I agree: do as much as you can in your test methods.
But sometimes providing some output is useful, especially when you need to further verify the results and that verification cannot be accomplished inside your unit test. Think of an external system that your dev/test environment has no access, or only limited access, to.
As an example, say you are hitting a web API to create a claim and the response is the new claim number, but the API does not expose methods to fetch a claim, and you need to verify some other data that was created when you made the call. In this case, you could use the output claim numbers to manually check the remote system, as in the sketch below.
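Here is what that can look like; ClaimsApiClient and its method are hypothetical stand-ins for the real web API client, and TestContext.WriteLine (NUnit 3) sends the message to the test's recorded output:

using NUnit.Framework;

// Hypothetical stand-in for the client that calls the real web API.
public class ClaimsApiClient
{
    public string CreateClaim(string payload)
    {
        // In practice this performs the HTTP call; stubbed for the sketch.
        return "CLM-0001";
    }
}

[TestFixture]
public class ClaimCreationTests
{
    [Test]
    public void CreateClaim_ReportsClaimNumber()
    {
        var api = new ClaimsApiClient();
        string claimNumber = api.CreateClaim("test payload");

        // Recorded in the test output even when the test passes, so the
        // number can be used later to check the remote system by hand.
        TestContext.WriteLine("Created claim: " + claimNumber);
        Assert.IsNotNull(claimNumber);
    }
}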
FWIW

How to unit-test a NextPasswordChangeDate function against the Active Directory

I am working on a project that uses Active Directory intensively. I have set up unit tests for several things against AD, some of which I achieve with mocked objects and some through real calls against AD.
One function of my project has to retrieve a so-called "user profile". This user profile consists mostly of simple attributes, like "cn", "company", "employeeid", etc. However, one property that I am trying to fill is not a simple one: "NextPasswordChangeDate".
To the best of my knowledge, the only way to get this is to read the domain policy's maxPwdAge and combine it with the user's pwdLastSet.
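For reference, a sketch of that computation, assuming the usual AD encodings: pwdLastSet is a FILETIME-style count of 100-nanosecond intervals since 1601-01-01 UTC, and the domain's maxPwdAge is stored as a negative 100-nanosecond interval.

using System;

public static class PasswordPolicy
{
    // Sketch: derive the next password change date from the raw values.
    public static DateTime NextPasswordChange(long pwdLastSet, long maxPwdAge)
    {
        DateTime lastSet = DateTime.FromFileTimeUtc(pwdLastSet);
        // maxPwdAge is negative in AD, so negate it to get the policy age.
        return lastSet + TimeSpan.FromTicks(Math.Abs(maxPwdAge));
    }
}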
Now my question: how can I unit test this in an intelligent way? I came up with three options, none of which is great:
Use my own account as the searched account, find out the date by other means, and hard-code it in the unit test. This way I can test my code well, but every time I change my password I have to update the unit test.
Use some account that has "password never expires" set. This is rather pointless, because it cannot really test the correctness of my code.
Use a mock object and make sure that the correct API calls happen. This option tests the correctness of the function's behaviour, but the tested logic effectively ends up restated in the unit test, so I cannot be sure the code is doing the right thing even when the test passes.
Which of the three do you suggest? Or maybe you have a better option?
Since options 1 and 2 rely on AD existing and having known values, they seem more like integration tests to me.
I generally take the side that any non-deterministic behavior should be interfaced out and mocked where possible (option 3). As you noted, this always leaves some real implementation code that is not unit-testable, but that code would then be covered by your integration tests running against a known AD system.
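A sketch of that seam, reusing the NextPasswordChange helper sketched under the question above; the interface, the fake, and the values are all invented:

using System;
using NUnit.Framework;

// The only part that talks to AD; the real implementation wraps the
// directory calls and is covered by integration tests instead.
public interface IPasswordAttributeReader
{
    long GetPwdLastSet(string account);
    long GetMaxPwdAge();
}

// Test double returning fixed raw values; no directory needed.
public class FakePasswordAttributeReader : IPasswordAttributeReader
{
    public long GetPwdLastSet(string account)
    {
        return new DateTime(2012, 1, 1, 0, 0, 0, DateTimeKind.Utc)
            .ToFileTimeUtc();
    }

    public long GetMaxPwdAge()
    {
        return -TimeSpan.FromDays(42).Ticks; // negated, as AD stores it
    }
}

[TestFixture]
public class NextPasswordChangeTests
{
    [Test]
    public void AddsPolicyAgeToPwdLastSet()
    {
        var reader = new FakePasswordAttributeReader();

        DateTime next = PasswordPolicy.NextPasswordChange(
            reader.GetPwdLastSet("someUser"), reader.GetMaxPwdAge());

        // 2012-01-01 plus the 42-day policy age is 2012-02-12.
        Assert.AreEqual(
            new DateTime(2012, 2, 12, 0, 0, 0, DateTimeKind.Utc), next);
    }
}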