How would I produce a JUnit test report for Groovy tests, suitable for consumption by Jenkins/Hudson? - unit-testing

I've written several XMLUnit tests (that fit into the JUnit framework) in Groovy and can execute them easily on the command line as per the Groovy documentation, but I don't quite understand what else I've got to do for it to produce the XML output that Jenkins/Hudson (or another CI server) needs to display the pass/fail results (like this) and the detailed report of the errors etc. (like this). (Apologies to the image owners.)
Currently, my kickoff script is this:
def allSuite = new TestSuite('The XSL Tests')
//looking in package xsltests.rail.*
allSuite.addTest(AllTestSuite.suite("xsltests/rail", "*Tests.groovy"))
junit.textui.TestRunner.run(allSuite)
and this produces something like this:
Running all XSL Tests...
....
Time: 4.141
OK (4 tests)
How can I make this create a JUnit test report xml file suitable to be read by Jenkins/Hudson?
Do I need to kick off the tests with a different JUnit runner?
I have seen this answer but would like to avoid having to write my own test report output.

After a little hacking around, I have taken Eric Wendelin's suggestion and gone with Gradle.
To do this I have moved my Groovy unit tests into the requisite directory structure, src/test/groovy/, with the supporting resources (input and expected-output XML files) going into the src/test/resources/ directory.
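For reference, the resulting layout looks roughly like this (the file names below are only illustrative, not the real ones from my project):
src/test/groovy/xsltests/rail/SomeRailTests.groovy
src/test/resources/xsltests/rail/some-input.xml
src/test/resources/xsltests/rail/some-expected-output.xml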
All required libraries are configured in the build.gradle file, shown here in its entirety:
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    testCompile group: 'junit', name: 'junit', version: '4.+'
    groovy module('org.codehaus.groovy:groovy:1.8.2') {
        dependency('asm:asm:3.3.1')
        dependency('antlr:antlr:2.7.7')
        dependency('xmlunit:xmlunit:1.3')
        dependency('xalan:serializer:2.7.1')
        dependency('xalan:xalan:2.7.1')
        dependency('org.bluestemsoftware.open.maven.tparty:xerces-impl:2.9.0')
        dependency('xml-apis:xml-apis:2.0.2')
    }
}

test {
    jvmArgs '-Xms64m', '-Xmx512m', '-XX:MaxPermSize=128m'
    testLogging.showStandardStreams = true // not sure about this one, was in the official user guide
    outputs.upToDateWhen { false } // makes it run every time, even when Gradle thinks it is "Up-To-Date"
}
This applies the Groovy plugin, configures Maven Central as the repository for the specified dependencies, and then adds some extra settings to the built-in test task.
The one extra thing in there is the last line, which makes Gradle run all of my tests every time rather than only the ones it thinks are new/changed; this makes Jenkins play nicely.
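One thing worth spelling out, from memory, so treat the exact path as an assumption: the built-in test task already writes JUnit-style XML report files under the build directory (build/test-results in this era of Gradle), so no extra reporting code is needed. The Jenkins "Publish JUnit test result report" post-build action just needs a matching pattern, something like:
**/build/test-results/*.xml
If the reports aren't picked up, check the build output, as the exact directory name can differ between Gradle versions.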
I also created a gradle.properties file to get through the corporate proxy/firewall etc:
systemProp.http.proxyHost=10.xxx.xxx.xxx
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=username
systemProp.http.proxyPassword=passwd
With this, I've created a 'free-style' project in Jenkins that polls our Mercurial repo periodically, so whenever anyone commits an updated XSL to the repo, all the tests are run.
One of my original goals was to produce the standard Jenkins/Hudson pass/fail graphics and the JUnit reports, and that has worked out: Pass/Fail with JUnit Reports.
I hope this helps someone else with similar requirements.

I find the fastest way to bootstrap this stuff is with Gradle:
// build.gradle
apply plugin: 'groovy'

task initProjectStructure() << {
    project.sourceSets.all*.allSource.sourceTrees.srcDirs.flatten().each { dir ->
        dir.mkdirs()
    }
}
Then run gradle initProjectStructure and move your sources into src/main/groovy and your tests into src/test/groovy.
It seems like a lot (really it's <5 minutes of work), but you get lots of stuff for free. Now you can run gradle test and it'll run your tests and produce JUnit XML, which you can find in build/test-reports in your project directory.

Since you're asking for the purposes of exposing the report to Jenkins/Hudson, I'm assuming you have a Maven/Ant/etc build that you're able to run. If that's true, the solution is simple.
First of all, there's practically no difference between Groovy and Java JUnit tests. So, all you need to do is add the Ant/Maven junit task/plugin to your build and have it execute your Groovy junit tests (just as you'd do if they were written in Java). That execution will create test reports. From there, you can simply configure your Hudson/Jenkins build to look at the directory where the test reports get created during the build process.
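To make that a bit more concrete, here is a rough sketch of the Ant side, driven from Groovy via AntBuilder so it stays in the same language as the tests. This is not taken from any build described here; it assumes junit and ant-junit are on the classpath and that the Groovy tests have already been compiled into build/test-classes (all paths and patterns are made up):
def ant = new AntBuilder()
ant.mkdir(dir: 'build/test-reports')
ant.junit(printsummary: 'yes', fork: 'true') {
    classpath {
        pathelement(location: 'build/test-classes')
        // plus the groovy, junit and xmlunit jars
    }
    formatter(type: 'xml') // this is what writes the JUnit-style XML that Jenkins reads
    batchtest(todir: 'build/test-reports') {
        fileset(dir: 'build/test-classes', includes: '**/*Tests.class')
    }
}
The xml formatter (or the Surefire defaults in a Maven build) is the piece that turns the test run into report files; the Jenkins job then just points at the directory they land in.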

You can write your own custom RunListener (or SuiteRunListener). It still requires you to write some code, but it's much cleaner than the script you've provided a link to. If you'd like, I can send you the code for a JUnit reporter I've written in JavaScript for Jasmine and you can 'translate' it into Groovy.
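For anyone who wants to try that route straight away, here is a minimal Groovy sketch of the idea, assuming JUnit 4 is on the classpath. The class and test names are made up, and the actual XML writing (the <testsuite>/<testcase> elements Jenkins expects) is the part you would still have to fill in:
import org.junit.runner.JUnitCore
import org.junit.runner.Result
import org.junit.runner.notification.Failure
import org.junit.runner.notification.RunListener

class XmlReportListener extends RunListener {
    def failures = []

    void testFailure(Failure failure) {
        failures << failure // remember each failure so it can be written to the report
    }

    void testRunFinished(Result result) {
        // build the XML report here, e.g. with groovy.xml.MarkupBuilder,
        // emitting <testsuite>/<testcase> elements plus the collected failures
    }
}

def core = new JUnitCore()
core.addListener(new XmlReportListener())
core.run(SomeRailTests) // hypothetical test class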

Related

(Google Test) Automatically retry a test if it failed the first time

Our team uses Google Test for automated testing. Most of our tests pass consistently, but a few seem to fail ~5% of the time due to race conditions, network time-outs, etc.
We would like the ability to mark certain tests as "flaky". A flaky test would be automatically re-run if it fails the first time, and will only fail the test suite if it fails both times.
Is this something Google Test offers out-of-the-box? If not, is it something that can be built on top of Google Test?
You have several options:
Use --gtest_repeat for the test executable:
The --gtest_repeat flag allows you to repeat all (or selected) test methods in a program many times. Hopefully, a flaky test will eventually fail and give you a chance to debug.
You can mimic tagging your tests by adding "flaky" somewhere in their names and then use the --gtest_filter option to repeat only those. Below are some examples from the Google documentation:
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.
$ foo_test --gtest_repeat=-1
A negative count means repeating forever.
$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.
$ foo_test --gtest_repeat=1000 --gtest_filter=Flaky.*
Repeat the tests whose name matches the filter 1000 times.
See here for more info.
Use bazel to build and run your tests:
Rather than tagging your tests in the test files, you can tag them in the bazel BUILD files.
You can tag each test individually using the cc_test rule.
You can also define a set of tests (using test_suite) in the BUILD file and tag them together (e.g. "small", "large", "flaky", etc). See here for an example.
Once you tag your tests, you can use simple commands like this:
% bazel test --test_tag_filters=performance,stress,-flaky //myproject:all
The above command will run all tests in //myproject that are tagged performance or stress and are not tagged flaky.
See here for documentation.
Using Bazel is probably cleaner because you don't have to modify your test files, and you can quickly change your test tags if things change.
See this repo and this video for examples of running tests using Bazel.

Why are test assemblies not filtered in a VSTS/Azure build pipeline despite the test assembly patterns?

Here are my test assembly patterns (configuration):
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\$(BuildConfiguration)\*Integration*
After triggering the build, here is the log, where the integration test assembly is still present (this assembly should have been filtered out and should not be here):
2019-04-23T13:10:33.6689787Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Test\bin\Release\myapp.Services.Test.dll
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll
Because of this the integration test cases are also running, and I want to run only the unit test cases.
Any idea?
I've found the solution; here is my latest configuration, which is now working exactly as expected.
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\myapp\*Integration*\**
!**\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Integration.Test*.dll
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.TestPlatform*
!**\$(BuildConfiguration)\*MSTest*
!**\$(BuildConfiguration)\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll*
Notice the line that excludes any path containing this pattern:
!**\myapp\*Integration*\**
The assembly below matches that pattern and is therefore no longer included in the result:
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll

SonarQube: see details of failed tests

In SonarQube 5.6.6, I can see on http://example.com/component_measures/metric/test_failures/list?id=myproject that my unit test results were successfully imported. This is indicated by
Unit Test Failures: 1
which I produced by a fake failing test.
I also see the filename of the failing test class in a long list, and I see the number of failed tests (again: 1).
But I can't find any more information: which method failed, the stack trace, stdout/stderr, in short everything that is also included in the build/reports/test/index.html files generated by Gradle. Clicking the list entry takes me to the code and coverage view, but I can't find any indication of which test failed.
Am I doing something wrong in the frontend, is it a configuration problem, or am I looking for a feature which doesn't exist in SonarQube?
This is how it looks currently:
http://example.com/component_measures/domain/Coverage: Here I see that one test failed:
http://example.com/component_measures/metric/test_success_density/list: I can see which file it is:
But clicking on the line above only takes me to the source file. Below is the test which "failed": there is no indication that this test failed, and I can't find any way to see the stack trace or the name of the failed test method:
Btw: the page in the first screenshot shows information about unit tests. But if the failing test is an integration test, I don't even see these numbers.
Update
Something like this is probably what I'm looking for:
(found on https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/)
I have never seen such a view on my installation and don't know how to get it, or whether it is implemented in the current version.
Unfortunately, the test execution details view is a deprecated feature as of SonarQube 5.6.
If you install an older version such as SonarQube 4.x, you will get the following screen, which provides test case result details.
But this screen itself has been removed.
Ref # https://jira.sonarsource.com/browse/SONARCS-657
Basically, the issue is that the unit test details report requires links back to source code files, but the unit test cases are now only linked to assemblies.

Gradle project with only configuration and no sources

I'd like to create a new Gradle project without any sources. I'm going to put there some configuration files and I want to generate a zip file when I build.
With Maven I'd use the assembly plugin. I'm looking for the easiest and lightest way to do this with Gradle. I wonder if I need to apply the java plugin even though I don't have any sources here, just because it provides some basic and useful tasks like clean, assemble and so on. Generating a zip is pretty straightforward; I know how to do that, but I don't know where and how the zip generation fits within the Gradle world.
I've done it manually until now. In other words, for projects where all I want to do is create some kind of distro and I need the basic lifecycle tasks like assemble and clean, I've simply created those tasks along with the needed dependencies.
But there is the 'base' plugin (mentioned under "Base plugins" of the "Standard Gradle Plugins" in the user's guide) that seems to fit the bill nicely for this functionality. Note though that the user guide mentions that this and the other base plugins are not yet considered part of the Gradle API and are not really documented.
The results are pretty much identical to yours, the only difference being that there are no confusing java specific tasks that always remain UP-TO-DATE.
apply plugin: 'base'

task dist(type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn(dist)
Sample run:
$ gradle clean assemble
:clean
:dist
:assemble
BUILD SUCCESSFUL
Total time: 2.562 secs
As far as I understood, it might sound strange, but it looks like I need to apply the java plugin in order to create a zip file. Furthermore, it's handy to have some common tasks available, like clean. The following is my build.gradle:
apply plugin: 'java'

task('dist', type: Zip) {
    from('solr')
    into('solr')
}

assemble.dependsOn dist
I applied the java plugin and defined my dist task, which creates a zip file containing a solr directory with the content of the solr directory within my project. The last line ensures the task is executed when I run the usual gradle build or gradle assemble, since I don't want to have to call the dist task explicitly.
This way if I work with multiple projects I just need to execute gradle build on the parent to generate all the artifacts, including the configuration zip.
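(Just to spell out the multi-project wiring, which is plain standard Gradle and nothing specific to this setup; the project name below is made up:)
// settings.gradle in the parent project
include 'solr-config' // the configuration-only project containing the dist task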
Please let me know if you have better solutions and add your own answer!
You could just use the groovy plugin and use ant. I did something like this. I do also like javanna's answer.
task jars(dependsOn: ['dev_jars']) << {
    def fromDir = file('/database-files/non_dev').listFiles().sort()
    File dist = new File("${project.buildDir}/dist")
    dist.mkdir()
    fromDir.each { File dir ->
        File destFile = new File("${dist.absolutePath}" + "/" + "database-connection-" + dir.name + ".jar")
        println destFile.getAbsolutePath()
        ant.jar(destfile: destFile, update: false, baseDir: dir)
    }
}

Excluding tests from tfs build

I want to exclude some tests from my continuous integration build but I haven't found a way to do so.
One of the things I've tried was to set the priority of those tests to -2 and then specify Minimum Test Priority = -1 on the build, but it still runs those tests.
Any help would be greatly appreciated.
Instead of using the "Test Lists" that have been described, you should use the "Test Category" method. The test lists and VSMDI functionality have actually been deprecated in Visual Studio 2010, and Microsoft may remove the feature completely in a future version of Visual Studio.
If you'd like some more information about how to use test categories especially with your automated build process, check out this blog post: http://www.edsquared.com/2009/09/25/Test+Categories+And+Running+A+Subset+Of+Tests+In+Team+Foundation+Server+2010.aspx
You can also exclude test categories from running by specifying the ! (exclamation point) character in front of the category name to further define your filter.
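As a rough illustration of the filter syntax, assuming I remember the mstest.exe switches correctly and with a category name that is only an example: once the tests to be skipped are decorated with a TestCategory, a command-line run can exclude them with something like:
mstest /testcontainer:MyApp.Tests.dll /category:"!Integration"
The same category expression can then be reused wherever your build definition exposes a category filter.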
If you are using MSTest you can create a Test List for the tests that you need in your continuous integration build.
With MSTest, you can simply create two test projects (assemblies) and only specify one in the build config to use for testing. In MSBuild, this was the way to go. For the new WF-Based build definitions, I currently don't have a sample at hand:
<ItemGroup>
    <!-- TEST ARGUMENTS
         If the RunTest property is set to true then the following test arguments will be used to run
         tests. Tests can be run by specifying one or more test lists and/or one or more test containers.

         To run tests using test lists, add MetaDataFile items and associated TestLists here. Paths can
         be server paths or local paths, but server paths relative to the location of this file are highly
         recommended:

         <MetaDataFile Include="$(BuildProjectFolderPath)/HelloWorld/HelloWorld.vsmdi">
             <TestList>BVT1;BVT2</TestList>
         </MetaDataFile>

         To run tests using test containers, add TestContainer items here:

         <TestContainer Include="$(OutDir)\AutomatedBuildTests.dll" />
         <TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />
         <TestContainer Include="$(SolutionRoot)\TestProject\LoadTest1.loadtest" />

         Use %2a instead of * and %3f instead of ? to prevent expansion before test assemblies are built
    -->
</ItemGroup>
<PropertyGroup>
    <RunConfigFile>$(SolutionRoot)\LocalTestRun.testrunconfig</RunConfigFile>
</PropertyGroup>
Tip: To keep the build definition generic, we name all our test projects "AutomatedBuildTests", i.e. there is no difference between solutions. That way the same snippet can be included in any existing build definition (or even a common one) and always executes the right set of tests. It would be easy to prepend an "if exists" check so that a build definition only runs tests when a test assembly is present. We don't do that, because we want build errors when no test assembly is found, as we absolutely want tests in all the builds that use this definition.
My preference would be, as above, to use a Test List, but some people have had issues merging/editing the .vsmdi files... We ended up with separate solutions and use a pattern match to execute all the tests in the appropriate DLL.
In Visual Studio 2012 and later you can configure your build definition using the Test case filter setting.
This setting is part of your build definition.
Open the build definition and navigate to the Process tab. In section 3. Test you can define multiple test sources. For each test source you can specify a Test case filter.
You can find the details in this MSDN article: Running selective unit tests in VS 2012 RC using TestCaseFilter
I have copied the supported operators and some examples from this article:
Operators supported in the RC are:
1. = (equals)
2. != (not equals)
3. ~ (contains or substring, only for string values)
4. & (and)
5. | (or)
6. ( ) (parentheses for grouping)
Expressions can be created from these operators as any valid logical condition. & (and) has higher precedence than | (or) when evaluating an expression.
E.g.
"TestCategory=NAR|Priority=1"
"Owner=vikram&TestCategory!=UI"
"FullyQualifiedName~NameSpace.Class"
"(TestCategory!=UI&(Priority=1|Priority=2))|(TestCategory=UI&Priority=1)"
Another possibility would be to have some test sources in one build definition and other (i.e. more or fewer) test sources in other build definitions.