Is it possible to get a report of unit tests run in TFS builds, grouped by solution? - unit-testing

We have a few thousand native and .NET unit tests. In Visual Studio 2012, I can run and see the results, grouped by the C++/C# project.
I'd like to get something like this view to the business people, preferably grouped by solution (product) and then by project (.dll). At the bare minimum I'd like to have the number of tests run and failed per solution.
Is there any proper way to do this with TFS?
I've looked everywhere and keep running into walls:
- TFS build test results don't seem to store any information about the test categories, so I can't use those to group by solution.
- .vsmdi lists and .testsettings files have been phased out in VS 2012 and TFS 2012. We had separate lists for each solution before; now it's just *test*.dll.
- Test Plans and custom SSRS reports seem to be completely useless for this much granularity of test results (why?). TfsTestWarehouse has barely anything - just enough for total tests passed/failed per build.
- Parsing TRX files and writing HTML reports seems to work best, using tools like trx2html, but I still can't run tests by solution.

TRX files are just XML, so there's no need to parse them by hand. You can write an XSLT transformation to present the data in the format you need. A nice thing about XSLT is that it has built-in aggregation, grouping and sorting capabilities.
If the TRX files themselves do not contain solution information (which is likely), you'll have to do a two-stage report generation: prepare the data, then generate the report.
The preparation would be a relatively simple command-line tool that goes over your .sln files and builds a map of which projects belong to which solutions (search the web, I bet there are already a bunch of scripts for that).
The generation part would then use that mapping as an argument to the transformation and report generation to properly aggregate the data.
I know it's a bit of a generic response, but I hope it helps at least a bit.
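As an illustration of the preparation step, here is a rough C# sketch (the directory layout, regex and CSV output are assumptions for illustration, not part of the original answer) that scans .sln files and maps each project name to the solution that contains it:
// Rough sketch of the "preparation" step: scan .sln files and build a
// project-to-solution map. Layout and output format are assumptions.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

class SolutionMapBuilder
{
    static void Main(string[] args)
    {
        // Project entries in a .sln look like:
        // Project("{GUID}") = "MyProject", "MyProject\MyProject.csproj", "{GUID}"
        var projectLine = new Regex(@"^Project\(""\{.*\}""\) = ""(?<name>[^""]+)""");
        var map = new Dictionary<string, string>();

        foreach (var sln in Directory.EnumerateFiles(args[0], "*.sln", SearchOption.AllDirectories))
        {
            var solutionName = Path.GetFileNameWithoutExtension(sln);
            foreach (var line in File.ReadLines(sln))
            {
                var match = projectLine.Match(line);
                if (match.Success)
                    map[match.Groups["name"].Value] = solutionName; // last solution wins if a project is shared
            }
        }

        // Emit a simple CSV that the report-generation stage can consume.
        foreach (var pair in map)
            Console.WriteLine("{0},{1}", pair.Key, pair.Value);
    }
}
The resulting mapping could then be fed to the second stage, for example as a parameter passed through an XsltArgumentList when applying the XSLT with XslCompiledTransform, so the report can group test assemblies by solution.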

I ended up solving this by adding the Project and Solution information in a custom Assembly Attribute (i.e. to the test .dll) at build time, through a custom MSBuild task. Here are roughly the steps I followed (from memory).
First, I created the custom Attribute:
[AttributeUsage(AttributeTargets.Assembly)]
public class ProjectAttribute : Attribute
{
    public string Project { get; set; }
    public string Solution { get; set; }

    public ProjectAttribute(string project, string solution)
    {
        this.Project = project;
        this.Solution = solution;
    }
}
This custom attribute was defined in an Assembly that was referenced by all unit test projects.
I then created a very simple/rudimentary inline MSBuild task, CreateProjectAttribCs, that would dynamically create an extra C# file with one line. Something like:
[assembly: ProjectAttribute("$(ProjectName)", "$(Solution)")]
And I then added this file to the <Compile> Item Group, inside a custom MSBuild target called before Compile (again, just going from memory):
<Target Name="CreateProjectAttribCs" BeforeTargets="Compile">
    <CreateProjectAttribCs File="ProjectAttribute.cs" />
    <ItemGroup>
        <Compile Include="ProjectAttribute.cs" />
    </ItemGroup>
</Target>
<Target Name="CleanupProjectAttribCs" AfterTargets="Compile">
    <Delete Files="ProjectAttribute.cs" />
</Target>
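The inline CreateProjectAttribCs task itself isn't reproduced above; purely as an illustration (this is not the original task), a compiled equivalent written against the MSBuild task API could look roughly like this, with the project and solution names passed in as task parameters:
// Illustration only - a compiled stand-in for the inline CreateProjectAttribCs
// task described above; the parameter names are assumptions.
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class CreateProjectAttribCs : Task
{
    [Required]
    public string File { get; set; }

    public string ProjectName { get; set; }

    public string SolutionName { get; set; }

    public override bool Execute()
    {
        // Write the one-line source file carrying the assembly-level attribute.
        System.IO.File.WriteAllText(File,
            string.Format("[assembly: ProjectAttribute(\"{0}\", \"{1}\")]",
                ProjectName, SolutionName));
        return true;
    }
}
The invocation in the target above would then also pass the names, e.g. <CreateProjectAttribCs File="ProjectAttribute.cs" ProjectName="$(ProjectName)" SolutionName="$(SolutionName)" /> (the exact property names depend on how the solution name is made available to the project during the build).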
For C++ projects I added the Project and Solution info to a String Table resource, "injected" in a similar way to the ProjectAttribute.cs file.
One major annoyance with all of this was that developers would have to add this custom MSBuild .targets file (which would contain the custom targets and the assembly reference) by editing the .csproj or .vcxproj.
To circumvent that a bit, I also created a custom Visual Studio Project Template for our team's unit tests so that everything was already added and my fellow devs would never have to see the innards of an MSBuild project.
The hardest part was adding the Project/Solution info. Once I had that, it was easy to read the custom attribute from the test assemblies (or the String Table resource from a native .dll) and add that information to the data parsed/transformed from the test results, which fed a custom test result database and report.
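For the managed assemblies, the read-back is plain reflection; a minimal sketch (the *test*.dll pattern and console output are just placeholders for whatever feeds your result database):
// Rough sketch: load each test assembly and pull the Project/Solution values
// out of the custom attribute defined earlier.
using System;
using System.IO;
using System.Reflection;

class TestAssemblyInfoReader
{
    static void Main(string[] args)
    {
        foreach (var path in Directory.EnumerateFiles(args[0], "*test*.dll"))
        {
            var assembly = Assembly.LoadFrom(path);
            var attribute = (ProjectAttribute)Attribute.GetCustomAttribute(
                assembly, typeof(ProjectAttribute));

            if (attribute != null)
                Console.WriteLine("{0}: solution={1}, project={2}",
                    Path.GetFileName(path), attribute.Solution, attribute.Project);
        }
    }
}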

Related

msbuild - tag propertygroup has attribute Label but not documented

I need to manually modify my .vcxproj and I'm trying to understand the MSBuild schema using the documentation.
In my existing .vcxproj, I have the tag <PropertyGroup Label="Globals"> but in the documentation there is no mention of the Label attribute.
This is for an existing Visual Studio C++ project and there's no error when I launch it.
What does the Label attribute do?
It is nowhere fully documented; the Target element documentation mentions it, but all it says is:
Optional attribute.
An identifier that can identify or order system and user elements.
A quick glance at the source code also reveals it is not actively used by the build system itself: it's just there, you can assign values to it and get them back, and that's it. As such it can serve as a means of adding a description to the XML (instead of using a comment), and that description can also be retrieved programmatically through the build system.
That is the only use I have actually seen made of it by a tool, namely Visual Studio: as you figured, it generates project files that contain some labels, and VS uses these to determine where to find/insert code produced by its user interface. A good example is the PropertySheets label: it's just an ImportGroup, and you can have an arbitrary number of those, but only the ImportGroup with the label PropertySheets will be displayed and modified by the Property Manager in VS. Likewise for the ProjectConfigurations ItemGroup, the Globals PropertyGroup, the Configuration items, etc.
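If you want to see which labels a project file carries, you can read them programmatically through the MSBuild construction API; a minimal sketch (the path argument and output format are just for illustration):
// Rough sketch: dump the Label attribute of property and import groups in a
// project file using the MSBuild construction API (Microsoft.Build.Construction).
using System;
using Microsoft.Build.Construction;

class DumpLabels
{
    static void Main(string[] args)
    {
        var root = ProjectRootElement.Open(args[0]); // e.g. MyProject.vcxproj

        foreach (var group in root.PropertyGroups)
            Console.WriteLine("PropertyGroup Label='{0}'", group.Label);

        foreach (var group in root.ImportGroups)
            Console.WriteLine("ImportGroup Label='{0}'", group.Label);
    }
}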

How to test ILOG JRules Ruleset without using DVS?

I'm trying to use JRules BRMS 7.1 for a project, and I found out that DVS has some limitations in testing a ruleset.
Specifically, it cannot test the content of collections of complex types in Excel scenario file templates.
I understand this is normal, as that kind of content is too complex for an Excel table format.
So, does anyone have an idea of the best way to test a ruleset that needs tons of test cases with lots of complex-type inputs, without using DVS?
If developers are doing the testing, then use JUnit with an embedded rule engine. If non-technical users need to perform testing, it may be simplest to upgrade to WODM 7.5, which does not have this limitation. If that is not an option, then it is possible to use JRules 7.1 DVS, but it is somewhat complex and involves creating a separate wrapper rule project that takes the output collections as input and, in its XOM, performs the comparison with the actual results.
Raj Rao is correct: you can use arrays as expected results (input is easy), but you will have to use a hidden JRules API and it is painful anyway.
JUnit or 7.5 is the answer.
Unless you want to pay IBM to do it, and even then they may say it is not possible because it is not detailed anywhere :(
Cheers
PS: BTW, arrays of complex types as input are easy for sure and well documented, I think.
If you have deployed your rules as an HTDS service to RES, then you could use SoapUI to test the HTDS web service.
SoapUI allows you to set up test cases that can be used to test different scenarios.
To validate the rules using Decision Validation Services, you create an Excel scenario file template that you populate with scenarios to test.
Before generating the Excel scenario file template, you must check that your project does not contain any errors or warnings that could prevent the generation of the Excel file.
1. In your Rule Explorer, select your project; in the rule project, enable the DVS part, click the checkpoint and make sure that you don't have any errors.
2. Create the scenario file: click Next and give the test project a name, e.g. name.xls.
3. Pass the values in the scenario and put the expected results in the expected results column.
4. You can test multiple scenarios at a time.
5. Now close and save the Excel file.
6. In the run configurations, right-click DVS Excel File and give the test any name.
7. In the Excel file field, click Browse and select the .xls file.
8. In the rule project field, select your rule project.
9. In the HTML report field, select your project and click OK.
10. Click Apply, then Run.
11. In Rule Studio, right-click your project and click Refresh.
12. The HTML file will be generated in the project.
13. Right-click it, open it with a web browser and observe the results of your scenarios.
14. You have successfully enabled DVS.

Excluding tests from tfs build

I want to exclude some tests from my continuous integration build but I haven't found a way to do so.
One of the things I've tried was to set the priority of those tests to -2 and then, in the build, specify Minimum Test Priority = -1, but it still ran those tests.
Any help would be greatly appreciated.
Instead of using the "Test Lists" that have been described, you should use the "Test Category" method. The test lists and VSMDI functionality have actually been deprecated in Visual Studio 2010, and Microsoft may remove the feature completely in a future version of Visual Studio.
If you'd like some more information about how to use test categories especially with your automated build process, check out this blog post: http://www.edsquared.com/2009/09/25/Test+Categories+And+Running+A+Subset+Of+Tests+In+Team+Foundation+Server+2010.aspx
You can also exclude test categories from running by specifying the ! (exclamation point) character in front of the category name to further define your filter.
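As a minimal sketch of what this looks like on the test side (the category names here are just examples), tests get tagged with categories and the build's category filter then includes or excludes them:
// Example only - tagging MSTest tests with categories so a build can filter on them.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderTests
{
    [TestMethod]
    [TestCategory("Nightly")]
    public void CalculatesTotal()
    {
        Assert.AreEqual(4, 2 + 2);
    }

    [TestMethod]
    [TestCategory("Slow")]
    public void ImportsLargeFile()
    {
        // A CI build definition could exclude this test by filtering out the "Slow" category.
        Assert.IsTrue(true);
    }
}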
If you are using MSTest you can create a Test List for the tests that you need in your continuous integration build.
With MSTest, you can simply create two test projects (assemblies) and only specify one in the build configuration to use for testing. In MSBuild-based builds, this was the way to go. For the new WF-based build definitions, I currently don't have a sample at hand:
<ItemGroup>
    <!-- TEST ARGUMENTS
    If the RunTest property is set to true then the following test arguments will be used to run
    tests. Tests can be run by specifying one or more test lists and/or one or more test containers.

    To run tests using test lists, add MetaDataFile items and associated TestLists here. Paths can
    be server paths or local paths, but server paths relative to the location of this file are highly
    recommended:

    <MetaDataFile Include="$(BuildProjectFolderPath)/HelloWorld/HelloWorld.vsmdi">
        <TestList>BVT1;BVT2</TestList>
    </MetaDataFile>

    To run tests using test containers, add TestContainer items here:

    <TestContainer Include="$(OutDir)\AutomatedBuildTests.dll" />
    <TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />
    <TestContainer Include="$(SolutionRoot)\TestProject\LoadTest1.loadtest" />

    Use %2a instead of * and %3f instead of ? to prevent expansion before test assemblies are built
    -->
</ItemGroup>
<PropertyGroup>
    <RunConfigFile>$(SolutionRoot)\LocalTestRun.testrunconfig</RunConfigFile>
</PropertyGroup>
Tip: To use a generic build definition, we name all our test projects "AutomatedBuildTests", i.e. there is no difference between solutions. That way the same test setup can be included in any existing build definition (or even a common one) and always executes the right set of tests. It would be an easy task to prepend an "if exists" check in order to allow a build definition to only run tests when a test assembly is present. We deliberately do not do this, so that we get build errors when no test assembly is found, as we absolutely want tests with all the builds that use this definition.
My preference would be as above, using a Test List, but some people have issues merging/editing the .vsmdi files... We ended up with separate solutions and use a pattern match to execute all tests in the appropriate DLL.
In Visual Studio 2012 and later you can configure your build definition using the Test case filter setting.
This setting is part of your build definition.
Open the build definition and navigate to the Process tab. In section 3, Test, you can define multiple test sources. For each test source you can specify a Test case filter.
You can find the details in this MSDN article: Running selective unit tests in VS 2012 RC using TestCaseFilter
I have copied the supported operators and some examples from this article:
Operators supported in the RC are:
1. = (equals)
2. != (not equals)
3. ~ (contains or substring, only for string values)
4. & (and)
5. | (or)
6. ( ) (parentheses for grouping)
An expression can be built from these operators as any valid logical condition. & (and) has higher precedence than | (or) when evaluating an expression.
E.g.
"TestCategory=NAR|Priority=1"
"Owner=vikram&TestCategory!=UI"
"FullyQualifiedName~NameSpace.Class"
"(TestCategory!=UI&(Priority=1|Priority=2))|(TestCategory=UI&Priority=1)"
Another possibility would be to have some test sources in one build definition and other (i.e. more or fewer) test sources in other build definitions.

How would I produce JUnit test report for groovy tests, suitable for consumption by Jenkins/Hudson?

I've written several XMLUnit tests (that fit into the JUnit framework) in Groovy and can execute them easily on the command line as per the Groovy documentation, but I don't quite understand what else I've got to do for it to produce the XML output that is needed by Jenkins/Hudson (or others) to display the pass/fail results (like this) and a detailed report of the errors etc. (like this). (Apologies to the image owners.)
Currently, my kickoff script is this:
def allSuite = new TestSuite('The XSL Tests')
//looking in package xsltests.rail.*
allSuite.addTest(AllTestSuite.suite("xsltests/rail", "*Tests.groovy"))
junit.textui.TestRunner.run(allSuite)
and this produces something like this:
Running all XSL Tests...
....
Time: 4.141
OK (4 tests)
How can I make this create a JUnit test report xml file suitable to be read by Jenkins/Hudson?
Do I need to kick off the tests with a different JUnit runner?
I have seen this answer but would like to avoid having to write my own test report output.
After a little hackage I have taken Eric Wendelin's suggestion and gone with Gradle.
To do this I have moved my groovy unit tests into the requisite directory structure src/test/groovy/, with the supporting resources (input and expected output XML files) going into the /src/test/resources/ directory.
All required libraries have been configured in the build.gradle file, as described (in its entirety) here:
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.+'
    groovy module('org.codehaus.groovy:groovy:1.8.2') {
        dependency('asm:asm:3.3.1')
        dependency('antlr:antlr:2.7.7')
        dependency('xmlunit:xmlunit:1.3')
        dependency('xalan:serializer:2.7.1')
        dependency('xalan:xalan:2.7.1')
        dependency('org.bluestemsoftware.open.maven.tparty:xerces-impl:2.9.0')
        dependency('xml-apis:xml-apis:2.0.2')
    }
}

test {
    jvmArgs '-Xms64m', '-Xmx512m', '-XX:MaxPermSize=128m'
    testLogging.showStandardStreams = true // not sure about this one, was in official user guide
    outputs.upToDateWhen { false } // makes it run every time even when Gradle thinks it is "Up-To-Date"
}
This applies the Groovy plugin, sets up to use maven to grab the specified dependencies and then adds some extra values to the built-in "test" task.
One extra thing in there is the last line, which makes Gradle run all of my tests every time and not just the ones it thinks are new/changed; this makes Jenkins play nicely.
I also created a gradle.properties file to get through the corporate proxy/firewall etc:
systemProp.http.proxyHost=10.xxx.xxx.xxx
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=username
systemProp.http.proxyPassword=passwd
With this, I've created a 'free-style' project in Jenkins that polls our Mercurial repo periodically and whenever anyone commits an updated XSL to the repo all the tests will be run.
One of my original goals was being able to produce the standard Jenkins/Hudson pass/fail graphics and the JUnit reports, which is a success: Pass/Fail with JUnit Reports.
I hope this helps someone else with similar requirements.
I find the fastest way to bootstrap this stuff is with Gradle:
// build.gradle
apply plugin: 'groovy'

task initProjectStructure() << {
    project.sourceSets.all*.allSource.sourceTrees.srcDirs.flatten().each { dir ->
        dir.mkdirs()
    }
}
Then run gradle initProjectStructure and move your source into src/main/groovy and your tests into src/test/groovy.
It seems like a lot (really it's <5 minutes of work), but you get lots of stuff for free. Now you can run gradle test and it'll run your tests and produce JUnit XML under build/test-results in your project directory.
Since you're asking for the purposes of exposing the report to Jenkins/Hudson, I'm assuming you have a Maven/Ant/etc build that you're able to run. If that's true, the solution is simple.
First of all, there's practically no difference between Groovy and Java JUnit tests. So, all you need to do is add the Ant/Maven junit task/plugin to your build and have it execute your Groovy junit tests (just as you'd do if they were written in Java). That execution will create test reports. From there, you can simply configure your Hudson/Jenkins build to look at the directory where the test reports get created during the build process.
You can write your own custom RunListener (or SuiteRunListener). It still requires you to write some code, but it's much cleaner than the script you've provided a link to. If you'd like, I can send you the code for a JUnit reporter I've written in JavaScript for Jasmine and you can 'translate' it into Groovy.

VS2010 and Create Unit Tests... no tests generated

I'm trying to add some unit tests to an existing code base using Visual Studio 2010's unit test generator. However, in some cases when I open a class and right-click --> Create Unit Tests..., after I select the methods to generate tests for, it creates what is essentially a blank test. Are there situations where this can happen? In every case I select at least one public method to generate tests for, and all it generates is this:
using TxRP.Controllers; //The location of the code to be tested
using Microsoft.VisualStudio.TestTools.UnitTesting;
That's it. Nothing else. Strange, right?
I should note that this is all MVC 2 controller code, and I have been able to generate tests for other controllers with no problem, and all my controllers follow pretty much the same format. No error seems to be thrown, as it generates the empty file happily and adds it to the project as if everything is just fine.
Has anyone had experience with the same type of thing happening, and was there any answer found as to why?
UPDATE:
There is in fact an error during generation:
While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
After some research, the only possible solution I found is that this error occurs if you're trying to generate tests into a test file that already exists. However, this solution is not working for me...
If you try to generate tests for a class which already has existing tests in another file in the project, it will just generate an empty file as described above. Changing the filename is not sufficient, nor is using a different location within the project. Basically it seems to enforce the one-testfile-per-class convention across the entire project.
This problem is caused by the previously generated test file having been moved to a folder other than the root folder in the test project.
Resolution:
1. Move the test file into the test project root folder.
2. Generate the new tests.
3. Move the test file back to the folder location you want in the test project.
I have no clue why they don't call it a BUG! In typical enterprise-level software development it is more than a coincidence that multiple people generate unit tests for different methods of the same class at different points in time.
We always end up with this error and it is not helping us in any way! It feels as if the "Create Unit Tests" context menu has little use!
Error description:
"While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
"