Test Result Report in Jenkins Pipeline using Groovy

I am setting up test result reporting in Jenkins for test projects written in various test frameworks (NUnit, MSTest, etc.) and would like to better understand the available report types and the difference between stages and post in pipeline execution.

Pipeline Execution
Stages execute in the order in which they appear; if a stage fails, any stages after it are skipped.
The post section, by contrast, runs after the stages regardless of whether they completed successfully.
Report Types
Provided I have a stage (produces test result):
stage('MSTest') {
    steps {
        bat(script: 'dotnet test "..\\TestsProject.csproj" --logger "trx;LogFileName=TestResult.xml"')
    }
}
And a post that runs always (consume test result to produce test result report):
post {
    always {
        xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [], tools: [ReportType(deleteOutputFiles: true, failIfNotNew: false, pattern: '..\\TestResult.xml', skipNoTestFiles: false, stopProcessingIfError: false)]
    }
}
Project variations:
If the test project is written in NUnit, the ReportType placeholder in tools: must be replaced with NUnit3 for the post section to execute successfully.
If the test project is written in MSTest, the ReportType placeholder in tools: must be replaced with MSTest.
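Putting the pieces together, a complete declarative pipeline might look like the sketch below (project path, file names, and threshold values are illustrative; the MSTest tool symbol from the xunit plugin is used, per the MSTest variation above):

```groovy
pipeline {
    agent any
    stages {
        stage('MSTest') {
            steps {
                bat(script: 'dotnet test "..\\TestsProject.csproj" --logger "trx;LogFileName=TestResult.xml"')
            }
        }
    }
    post {
        always {
            // MSTest here is the report type; use NUnit3 for NUnit projects
            xunit testTimeMargin: '5000', thresholdMode: 1, thresholds: [],
                  tools: [MSTest(deleteOutputFiles: true, failIfNotNew: false,
                                 pattern: '..\\TestResult.xml', skipNoTestFiles: false,
                                 stopProcessingIfError: false)]
        }
    }
}
```

Because post { always { ... } } runs even when the MSTest stage fails, the report is published for failed builds as well.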

Related

GoLand not showing individual test results after test run

I am trying to fix some unit tests in GoLand using the (I believe) standard "testing" package in Go, but I'm having trouble figuring out which test is failing. After I run the tests, there is nothing shown in the test results dropdown, it is just empty (see below).
I wrote a dummy empty test that just prints "here" to test if it worked on just a simple test, and even then I get no test results in the explorer. The test passes and prints the expected output.
func Test_ResultsShow(t *testing.T) {
    println("here")
}
=== RUN Test_ResultsShow
here
--- PASS: Test_ResultsShow (0.00s)
PASS
Process finished with the exit code 0
Additionally, when I try to run my larger suite of tests, the number of passed (24) and failed (1) tests don't add up to the total number of tests indicated (26). I see no indication of any test failure in the test output either, and I've run all the tests individually to see which test is failing, but all of them succeeded.
The blacked-out section below covers the repository name, but the individual test names are not shown below it (though the output confirms they run).

Gradle : Multiple configurations for Test tasks

I have got two types of tests in my app as follows:
Unit tests (large number of tests and quick to execute)
Integration tests (small number of tests but each suite takes considerable time)
My project uses Gradle and I want both sets of tests to execute concurrently. As per Gradle's documentation, I can use the maxParallelForks setting to parallelize the execution. However, as Gradle distributes tests to workers statistically, there is a chance that all my integration tests get allocated to the same worker.
So, what I really want is to have two sets of test blocks in my gradle file, e.g.:
test {
    include 'org/unit/**'
    exclude 'org/integration/**'
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
test {
    include 'org/integration/**'
    exclude 'org/unit/**'
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
Does gradle support two different test profiles like the above? If yes, can I execute those two in parallel?
I am assuming you have them all under the same source set: src/test/java.
I'd suggest creating a separate source set and task specifically for integration tests. See Configuring integration tests.
And since you want to parallelize the execution, then you'll need to create a custom task that submits both your unit and integration test tasks to the Worker API: https://guides.gradle.org/using-the-worker-api/
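A rough sketch of that separate source set and task in build.gradle, following the pattern from the Gradle docs (source-set name, directory, and task name are illustrative, not taken from the project above):

```groovy
sourceSets {
    integrationTest {
        java.srcDir 'src/integrationTest/java'
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

// Let the integration tests reuse the declared test dependencies
configurations {
    integrationTestImplementation.extendsFrom implementation
    integrationTestRuntimeOnly.extendsFrom runtimeOnly
}

tasks.register('integrationTest', Test) {
    description = 'Runs the integration tests.'
    group = 'verification'
    testClassesDirs = sourceSets.integrationTest.output.classesDirs
    classpath = sourceSets.integrationTest.runtimeClasspath
    shouldRunAfter tasks.test
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
```

With separate tasks, ./gradlew test integrationTest runs both; whether they actually run concurrently depends on your parallelism settings (e.g. org.gradle.parallel across projects).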
Starting from Gradle 7.4, the built-in JVM Test Suite Plugin is the way to go:
testing {
    suites {
        test {
            useJUnitJupiter()
        }
        integrationTest(JvmTestSuite) {
            dependencies {
                implementation project
                // other integration-test-specific dependencies
            }
            targets {
                all {
                    testTask.configure {
                        shouldRunAfter(test)
                        maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
                    }
                }
            }
        }
    }
}

TFS Build servers and critical Unit Tests

When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest() {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
XUnit .NET currently does not support TestCaseFilters.
Then in your build definition you can create two test runs, one that runs Critical tests, one that runs everything else. You can use the Filter option of the Test Run.
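With the filter syntax above, the two runs' filter expressions might look like this (the category name is the one chosen in the attribute earlier):

```
First run:  TestCategory=Critical
Second run: TestCategory!=Critical
```

The first run then contains only the critical tests, and the second run picks up everything else.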
Open the Test Runs window (the button is easy to miss) and create two test runs. Configure the first run to filter on the "Critical" category, and the second to exclude it.
This way Team Build will run any test with the "Critical" category in the first run and fail the build if one of them fails. If the first run succeeds, it will kick off the non-critical tests and will only Partially Succeed, even when a test fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.

Command for karma-jasmine to stop unit-test after first fail

Is there any command for karma-jasmine to stop the run as soon as it encounters the first failing test? For example, pytest offers:
py.test -x            # stop after first failure
py.test --maxfail=2   # stop after two failures
Currently I am using node_modules/karma/bin/karma start, which runs all the tests and stops only after everything has executed.
This would require creating a custom reporter, or changing the reporter in the karma-jasmine adapter to stop on spec failure, for example:
this.specDone = function (specResult) {
    // failedExpectations is non-empty when the spec failed
    var failure = specResult.failedExpectations.length;
    if (failure) {
        // end the current suite and the whole run early
        suiteDone();
        jasmineDone();
    }
};
References
jasmine.io: custom_reporter.js
karma-jasmine source: adapter.js
Jasmine Issue #842: Async reporter hooks
Protractor Issue #1938: Find a good pattern for waiting for Jasmine Reporters
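As a rough standalone sketch (not the actual adapter code), a custom Jasmine reporter only needs a specDone hook; the object and property names below are illustrative:

```javascript
// Minimal sketch of a bail-on-first-failure Jasmine reporter.
// Jasmine calls specDone(result) after each spec; result.failedExpectations
// is an array describing that spec's failures.
const bailReporter = {
  bailed: false,
  specDone(result) {
    if (!this.bailed && result.failedExpectations.length > 0) {
      this.bailed = true;
      // In a real runner you would abort here, e.g. throw or process.exit(1).
    }
  },
};
```

Registered via jasmine.getEnv().addReporter(bailReporter), it would flip bailed on the first failing spec, at which point you can stop the run.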
Alternatively, you can tell Jasmine to run a specific spec, or only the specs in one folder, so that you exercise a subset of your suite rather than running everything.
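Depending on your karma-jasmine version, you may also be able to pass Jasmine options through the client config in karma.conf.js; if your version supports the failFast flag (check the karma-jasmine README for your release), the run stops on the first failure:

```javascript
// karma.conf.js (fragment) - assumes a karma-jasmine version that
// forwards these options to Jasmine; verify against your version's README.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    client: {
      jasmine: {
        failFast: true // stop the run after the first failing spec
      }
    }
  });
};
```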

Running NUnit tests multiple times

I have a suite of NUnit tests, some of which fail intermittently, probably because of timing problems. I'd like to find these flaky unit tests. Is there a way to repeat each test multiple times without having to put a Repeat() attribute on each test? We routinely use the ReSharper and NCrunch runners, but also have access to the NUnit GUI and console runners.
NUnit 3
In NUnit 3, you may use the Retry attribute:
RetryAttribute is used on a test method to specify that it should be rerun if it fails, up to a maximum number of times.
Notes:
It is not currently possible to use RetryAttribute on a TestFixture or any other type of test suite. Only single tests may be repeated.
If a test has an unexpected exception, an error result is returned and it is not retried. Only assertion failures can trigger a retry. To convert an unexpected exception into an assertion failure, see the ThrowsConstraint.
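In NUnit 3 the attribute takes the total number of attempts as its argument; a minimal sketch (the test name and the method under test are illustrative):

```csharp
[Test]
[Retry(3)] // attempt up to 3 times; only assertion failures trigger a retry
public void FlakyTimingTest()
{
    var result = DoTimingSensitiveWork(); // hypothetical method under test
    Assert.That(result, Is.True);
}
```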
NUnit 2
NUnit 2 doesn't support retries, but you may use NUnit-retry plug-in (NuGet, GitHub). An example of use:
private static int run = 0;
...
[Test]
[Retry(Times = 3, RequiredPassCount = 2)]
public void One_Failure_On_Three_Should_Pass()
{
    run++;
    if (run == 1)
    {
        Assert.Fail();
    }
    Assert.Pass();
}
See also
Feature - Add 'Retry Attribute' to repeat test upon failure. Discussion about the feature on Launchpad