No unit test success reported to Sonar

A Gradle build creates a JaCoCo report which is picked up by the Sonar Runner. After the Sonar results are pushed to the Sonar server, unit test coverage is displayed, but unit test success shows 0 even though all tests run successfully.
Looking at the Sonar Runner log, I see the following items reporting the source and class locations:
15:52:36.087 INFO - Source dirs: /poc-sonar/src/main/java
15:52:36.087 INFO - Test dirs: /poc-sonar/src/test/java, /Volumes/Disk/Development/poc-sonar/src/test/groovy
15:52:36.088 INFO - Binary dirs: /poc-sonar/build/classes/main
This raises the first question: does Sonar have to see the compiled test classes for analysis?
Further down the log:
15:52:37.435 INFO - Sensor JaCoCoSensor...
15:52:37.445 INFO - Analysing /poc-sonar/build/jacoco/test.exec
15:52:37.546 INFO - No information about coverage per test.
15:52:37.548 INFO - Sensor JaCoCoSensor done: 113 ms
15:52:38.105 INFO - Execute decorators...
15:52:38.580 INFO - Store results in database
JaCoCo has picked up the test.exec file, but reports "No information about coverage per test".
What does that log statement mean? The Sonar server actually exposes correct coverage! Is it an indicator for the missing test success reported by Sonar? What is missing for getting the unit test success reported?
The Sonar dashboard shows:
Unit Tests Coverage: 50,0% (50,0% line coverage)
Unit test success: 0 tests
The full Gradle build script:
ext {
    spockVersion = '0.7-groovy-2.0'
    groovyVersion = '2.2.1'
}

apply plugin: 'idea'
apply plugin: 'java'
apply plugin: 'groovy'
apply plugin: 'jacoco'
apply plugin: 'sonar-runner'

group = "poc"
version = "1.0.0-SNAPSHOT"

sourceCompatibility = 1.7
targetCompatibility = 1.7

repositories {
    maven {
        credentials {
            username "${artifactoryUsername}"
            password "${artifactoryPassword}"
        }
        url "${artifactoryContextUrl}"
    }
}

dependencies {
    testCompile "junit:junit-dep:4.11"
    testCompile "org.codehaus.groovy:groovy-all:$groovyVersion"
    testCompile "org.spockframework:spock-core:$spockVersion"
}

tasks.withType(Test) { task ->
    jacoco {
        destinationFile = file("$buildDir/jacoco/${task.name}.exec")
    }
}

sonarRunner {
    sonarProperties {
        property 'sonar.projectName', rootProject.name
        property 'sonar.projectDescription', rootProject.name

        // sonar server and database
        property "sonar.host.url", sonarHostUrl
        property "sonar.jdbc.url", sonarJdbcUrl
        //property "sonar.jdbc.driverClassName", "com.mysql.jdbc.Driver"
        property "sonar.jdbc.username", sonarJdbcUsername
        property "sonar.jdbc.password", sonarJdbcPassword

        property 'sonar.sourceEncoding', 'UTF-8'
    }
}

tasks.sonarRunner.dependsOn = []

In recent Sonar versions, the Sonar property for test report location has been renamed from sonar.surefire.reportsPath to sonar.junit.reportsPath. Hence you may have to set the latter manually. For example:
apply plugin: "sonar"

subprojects {
    apply plugin: "java"
    sonarRunner {
        sonarProperties {
            property "sonar.junit.reportsPath", test.reports.junitXml.destination
        }
    }
}
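Applied to the single-project build script from the question, a minimal sketch might look like this (assuming the test task's default junitXml report destination; verify the actual path for your Gradle version):

sonarRunner {
    sonarProperties {
        // point Sonar at the JUnit XML results produced by the test task
        property 'sonar.junit.reportsPath', test.reports.junitXml.destination
    }
}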

Related

Azure Pipeline: How to save Visual Studio "Test Results" to use with other tasks in the pipeline?

I have a pipeline (classic view) with the task "Visual Studio Test" (task version 2.*).
After the task completes I can see that it prints the test results in the log.
How can I save 'Total Tests' and 'Passed Tests' in variables to use in further tasks of the pipeline?
I tried extracting the .trx file but it gets deleted after the task completes.
Performing VsTest gives me this (some tests fail, but that's OK):
Adding trx file C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults\----.trx to run attachments
**************** Completed test execution *********************
Result Attachments will be stored in LogStore
Publishing test results to test run '3748'.
TestResults To Publish 189, Test run id:3748
Test results publishing 189, remaining: 0. Test run id: ---
Published test case results: 189
Result Attachments will be stored in LogStore
Run Attachments will be stored in LogStore
Received the command : Stop
TestExecutionHost.ProcessCommand. Stop Command handled
SliceFetch Aborted. Moving to the TestHostEnd phase
Please use this link to analyze the test run : https://---
Test run '---' is in 'Completed' state with 'Total Tests' : 202 and 'Passed Tests' : 19.
##[error]System.Exception: Some tests in the test run did not pass, failing the task.
##########################################################################
Finishing: VsTest - testPlan
When I try to cd into the TestResults:
+ cd C:\vsts-agent-win-x64-2.165.2\_work\6\s\TestResults\TestResults
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (C:\vsts-agent-w...lts\TestResults:String) [Set-Location], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand
##[error]PowerShell exited with code '1'.
You can change the default test result output folder by setting the Test results folder field. See below:
Folder to store test results. When this input is not specified, results are stored in $(Agent.TempDirectory)/TestResults by default, which is cleaned at the end of a pipeline run
In the above example, the test result .trx file will be stored in the $(System.DefaultWorkingDirectory)\TestResults folder, which is not cleaned up.
Then you can extract the .trx file in the following tasks and save 'Total Tests' and 'Passed Tests' in variables.
[Screenshots from my test pipeline: the VsTest task log, and a PowerShell task listing the contents of the results folder.]
So it seems VsTest deletes all its results after the task is complete.
I solved this with a REST API command.
Make sure you convert your Personal Access Token to Base64...
Here's how I did it:
$personalToken = [your token]
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{ authorization = "Basic $token" }
$params = @{
    Uri = 'https://dev.azure.com/[organization]/[project]/_apis/test/runs?buildIds=[BuildId]&api-version=6.0'
    Headers = $header
    Method = 'GET'
}
$output = Invoke-RestMethod @params
$run = $output.value | Where-Object { $_.name -match [BuildId] }
Write-Host "Total Tests: $($run.totalTests)"
Write-Host "Passed Tests: $($run.passedTests)"
Write-Host "Failed Tests: $($run.unanalyzedTests)"
Write-Host "Skipped Tests: $($run.incompleteTests)"

Unit testing in vuejs

I am trying to configure and run my first unit test for Vuejs, but I can't get past the configuration issues. I have tried installing the libraries, but for some reason I keep getting errors.
Here is an example of what my code looks like:
My directory structure:
hello/
  dist/
  node_modules/
  src/
    components/
      hello.vue
  test/
    setup.js
    test.spec.js
  .babelrc
  package.json
  webpack.config.js
Contents inside my files
src/components/hello.vue
<template> <div> {{message}} </div> </template>
<script>
export default {
  name: 'hello',
  data () { return { message: 'Hi' } },
  created () {
    // ...
  }
}
</script>
test/setup.js
// setup JSDOM
require('jsdom-global')()
// make expect available globally
global.expect = require('expect')
test/test.spec.js
import { shallow } from '@vue/test-utils'
import hello from '../src/components/hello.vue'

describe('hello', () => {
  it('works', () => {
    // just testing simple data to see if it works
    expect(1).toBe(1)
  })
})
.babelrc
{
"env": {
"development": {
"presets": [
[
"env",
{
"modules": false
}
]
]
},
"test": {
"presets": [
[
"env",
{
"modules": false,
"targets": {
"node": "current"
}
}
]
],
"plugins": [
"istanbul"
]
}
}
}
package.json
...
"scripts": {
"build": "webpack -p",
"test": "cross-env NODE_ENV=test nyc mocha-webpack --webpack-config webpack.config.js --require test/setup.js test/**/*.spec.js"
},
"devDependencies": {
"babel-core": "^6.26.0",
"babel-loader": "^7.1.2",
"babel-preset-env": "^1.6.1",
"cross-env": "^5.1.1",
"css-loader": "^0.28.7",
"file-loader": "^1.1.5",
"node-sass": "^4.7.2",
"sass-loader": "^6.0.6",
"vue-loader": "^13.5.0",
"vue-template-compiler": "^2.5.9",
"webpack": "^3.10.0",
"webpack-dev-server": "^2.9.7",
"jsdom": "^11.3.0",
"jsdom-global": "^3.0.2",
"mocha": "^3.5.3",
"mocha-webpack": "^1.0.0-rc.1",
"nyc": "^11.4.1",
"expect": "^21.2.1",
"#vue/test-utils": "^1.0.0-beta.12"
},
...
"nyc": {
"include": [
"src/**/*.(js|vue)"
],
"instrument": false,
"sourceMap": false
}
and finally my webpack.config.js
...
if (process.env.NODE_ENV == "test") {
  module.exports.externals = [require('webpack-node-externals')()]
  module.exports.devtool = 'inline-cheap-module-source-map'
}
now when I run npm test from my root folder hello/ I get this error:
> hello@1.0.0 test C:\Users\john\vue-learn\hello
> npm run e2e
> hello@1.0.0 e2e C:\Users\john\vue-learn\hello
> node test/e2e/runner.js
Starting selenium server... started - PID: 12212
[Test] Test Suite
=====================
Running: default e2e tests
× Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
FAILED: 1 assertions failed (20.281s)
_________________________________________________
TEST FAILURE: 1 assertions failed, 0 passed. (20.456s)
× test
- default e2e tests (20.281s)
Timed out while waiting for element <#app> to be present for 5000 milliseconds. - expected "visible" but got: "not found"
at Object.defaultE2eTests [as default e2e tests] (C:/Users/john/Google Drive/lab/hello/test/e2e/specs/test.js:13:8)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! hello@1.0.0 e2e: `node test/e2e/runner.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the hello@1.0.0 e2e script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\john\AppData\Roaming\npm-cache\_logs\2018-04-03T23_53_15_976Z-debug.log
npm ERR! Test failed. See above for more details.
I don't know why this happens. When I first set up my webpack project with npm init I didn't install a testing library, so there should be no conflicts, but I still get that error:
update (after bounty)
I'm just trying to test my vuejs application, hopefully with Jasmine/Karma. If anyone knows how to integrate these into a simple app and run the first test, I can take it from there. My problem is not writing the tests but configuring them.
So, first thing: you didn't need to enable end-to-end testing in your project. I would say start fresh:
$ npm install -g vue-cli
$ vue init webpack vue-testing
? Project name vue-testing
? Project description A Vue.js project
? Author Tarun Lalwani <tarun.lalwani@payu.in>
? Vue build standalone
? Install vue-router? Yes
? Use ESLint to lint your code? Yes
? Pick an ESLint preset Standard
? Set up unit tests Yes
? Pick a test runner karma
? Setup e2e tests with Nightwatch? No
? Should we run `npm install` for you after the project has been created? (recommended) yarn
Say No to "Setup e2e tests with Nightwatch" and choose karma for "Pick a test runner".
$ npm test
> vue-testing@1.0.0 test /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> npm run unit
> vue-testing@1.0.0 unit /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/vue-testing
> cross-env BABEL_ENV=test karma start test/unit/karma.conf.js --single-run
07 04 2018 21:35:28.620:INFO [karma]: Karma v1.7.1 server started at http://0.0.0.0:9876/
07 04 2018 21:35:28.629:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
07 04 2018 21:35:28.645:INFO [launcher]: Starting browser PhantomJS
07 04 2018 21:35:32.891:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket M1HeZIiOis3eE3mLAAAA with id 44927405
HelloWorld.vue
✓ should render correct contents
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 1 of 1 SUCCESS (0.061 secs / 0.041 secs)
TOTAL: 1 SUCCESS
=============================== Coverage summary ===============================
Statements : 100% ( 2/2 )
Branches : 100% ( 0/0 )
Functions : 100% ( 0/0 )
Lines : 100% ( 2/2 )
================================================================================
Now your npm test would work fine.
According to the error logs you provide here, the failing tests are the end-to-end ones. Indeed, your npm test command ends up running Nightwatch via npm run e2e. Look under test/e2e/specs: there you should have a default test file checking that your Vue application properly creates a DOM element identified as app.
The test should be the following:
// For authoring Nightwatch tests, see
// http://nightwatchjs.org/guide#usage
module.exports = {
'default e2e tests': function (browser) {
// automatically uses dev Server port from /config.index.js
// default: http://localhost:8080
// see nightwatch.conf.js
const devServer = browser.globals.devServerURL
browser
.url(devServer)
.waitForElementVisible('#app', 5000)
.assert.elementPresent('.hello')
.assert.containsText('h1', 'Welcome to Your Vue.js App')
.assert.elementCount('img', 1)
.end()
}
}
In your case this test is failing because you have probably removed the App.vue file that is generated through the vue-cli scaffolding. The error you get is because the above test checks, with a 5-second timeout, whether a DOM node with the id "app" is rendered (i.e. .waitForElementVisible('#app', 5000)).
Basically it is failing because you no longer provide this div in your application (due to the App.vue removal, maybe).
So you have two options here:
restoring the App.vue file (i.e. creating a div identified as 'app' where you mount a Vue instance);
editing the end-to-end test according to your needs.
Hope this helps!

Grails Unit Test Exception java.lang.Exception: No tests found matching grails test target pattern filter

I am just starting to learn Grails testing and I tried to write my first Grails test. For this, I created a fresh Grails project and a controller named com.rahulserver.SomeController:
package com.rahulserver

class SomeController {

    def index() { }

    def someAction() {
    }
}
When I created this controller, Grails automatically created a com.rahulserver.SomeControllerSpec under the test/unit folder.
Here is my SomeControllerSpec.groovy:
package com.rahulserver

import grails.test.mixin.TestFor
import spock.lang.Specification

/**
 * See the API for {@link grails.test.mixin.web.ControllerUnitTestMixin} for usage instructions
 */
@TestFor(SomeController)
class SomeControllerSpec extends Specification {

    def setup() {
    }

    def cleanup() {
    }

    void testSomeAction() {
        assert 1==1
    }
}
When I right-click this class and run the test, I get the following:
Testing started at 5:21 PM ...
|Loading Grails 2.4.3
|Configuring classpath
.
|Environment set to test
....................................
|Running without daemon...
..........................................
|Compiling 1 source files
.
|Running 1 unit test...|Running 1 unit test... 1 of 1
--Output from initializationError--
Failure: |
initializationError(org.junit.runner.manipulation.Filter)
|
java.lang.Exception: No tests found matching grails test target pattern filter from org.junit.runner.Request$1@1f0f9da5
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:35)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
No tests found matching grails test target pattern filter from org.junit.runner.Request$1@1f0f9da5
java.lang.Exception: No tests found matching grails test target pattern filter from org.junit.runner.Request$1@1f0f9da5
at org.junit.internal.requests.FilterRequest.getRunner(FilterRequest.java:35)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
|Completed 1 unit test, 1 failed in 0m 0s
.Tests FAILED
|
- view reports in D:\115Labs\grailsunittestdemo\target\test-reports
Error |
Forked Grails VM exited with error
Process finished with exit code 1
So why is it failing?
EDIT
I am using Grails 2.4.3.
The unit tests are defined with Spock by default:
void testSomeAction() {
    assert 1==1
}
Should be written as:
void "Test some action"() {
    expect:
    1==1
}
See http://spockframework.github.io/spock/docs/1.0/index.html
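Putting this together, the whole spec might look roughly like the sketch below (based on the question's code; the implicit controller and response objects are provided by the @TestFor mixin):

package com.rahulserver

import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(SomeController)
class SomeControllerSpec extends Specification {

    void "test some action"() {
        when:
        // the mixin injects a 'controller' instance of SomeController
        controller.someAction()

        then:
        // an empty action still answers with HTTP 200 on the mock response
        response.status == 200
    }
}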

Grails unit test case showing Completed 0 spock test, 0 failed

I am new to Grails and am using version 2.2.2.
My test cases are not running, even though it says the test cases passed.
I get the following message after running the test case:
Resolving [test] dependencies...
Resolving [runtime] dependencies...
| Compiling 1 source files.
| Error log4j:ERROR Property missing when configuring log4j: grails
| Error log4j:ERROR Property missing when configuring log4j: grails
| Error log4j:ERROR WARNING: Exception occured configuring log4j logging: null
| Completed 0 spock test, 0 failed in 1409ms
| Tests PASSED - view reports in D:\workspace_idea\optapp\target\test-reports
@TestFor(KpiLog)
@TestMixin(DomainClassUnitTestMixin)
@Mock(KpiLog)
class KpiLogSpec extends Specification {

    void "savelog"() {
        println "*********"

        when:
        def kpiLog = new KpiLog(scenarioId: 1, kpiId: 2, deltaKpi: 5)
        kpiLog.save(flush: true)

        then:
        KpiLog.list() != null
    }

    void testSaveFacebookUser() {
        //given
        def kpiLog = new KpiLog(scenarioId: 1, kpiId: 2, deltaKpi: 5)
        //adminRole.addToPermissions("*:*")
        kpiLog.save()
    }
}
Can someone please tell me what it is that I am doing wrong?
I am running the test case as grails test-app -unit KpiLogSpec.
Here is the log4j section from the Config.groovy file:
log4j = {
    // Example of changing the log pattern for the default console
    // appender:
    //
    //appenders {
    //    console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
    //}
    debug 'grails.app'
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'
    appenders {
        console name:'S', layout:pattern(conversionPattern: '%d %-5p %c - %m%n')
        //rollingFile name: 'R', file:'/usr/local/jd/logs/optimizer.log', maxFileSize: '5000000KB'
        rollingFile name: 'R', file:grails.config.logPath, maxFileSize: '5000000KB'
        environments {
            production {
                appender new AWSSNSAppender(
                    name:'SNS',
                    topicName:config.optimizer.snsAppender.topicName,
                    topicSubject:config.optimizer.snsAppender.topicSubject,
                    awsAccessKey:config.optimizer.aws.accessKey,
                    awsSecretKey:config.optimizer.aws.secretKey,
                    threshold:Level.toLevel(optimizer.snsAppender.threshold, Level.ERROR)
                )
            }
        }
    }
    info R: ['NotifierService', 'aggDataStackLog','constraintGroupLog','pageFilterLog','alertDebugLog','dacCacLog','tpFlagExportImportLog','timelog','calculationProgress','dictionaryLog','loginServiceLog', 'calculateScenarioLog','connectionLog','dataLabelServiceLog','cross-section-service','qe-basic-executor-service','qe-plan-enumerator-impl-service','qe-basic-planenum-ssservice','ct-dimension-hierarchy-service','cbRuleLog'], additivity:true
    error SNS: ['aggDataStackLog','constraintGroupLog','pageFilterLog','alertDebugLog','dacCacLog','tpFlagExportImportLog','timelog','calculationProgress','dictionaryLog','loginServiceLog', 'calculateScenarioLog','connectionLog','dataLabelServiceLog','cross-section-service','qe-basic-executor-service','qe-plan-enumerator-impl-service','qe-basic-planenum-ssservice','ct-dimension-hierarchy-service','cbRuleLog'], additivity:true
    //info SNS: ['aggDataStackLog','calculateScenarioLog']
    /*root {
        error 'R'
        additivity = true
    } */
}
Here is the test code which I ran:
@TestMixin(GrailsUnitTestMixin)
class FooSpec extends Specification {

    def setup() {
    }

    def cleanup() {
    }

    void "test something"() {
        println "****************testing in real "
        assertTrue(1==1)
    }
}
Run the test case like this:
grails test-app unit: KpiLog
The important thing is that you use unit: instead of -unit, and KpiLog instead of KpiLogSpec.
Then, do not define a variable named log:
def log = new KpiLog(scenarioId: 1, kpiId: 2, deltaKpi: 5)
It is reserved for logging in Grails classes (controllers, services, ...). Rename the variable from log to kpiLog:
def kpiLog = new KpiLog(scenarioId: 1, kpiId: 2, deltaKpi: 5)
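Putting both fixes together, the spec might look roughly like the sketch below (based on the question's code, with the reserved log name avoided):

import grails.test.mixin.Mock
import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(KpiLog)
@Mock(KpiLog)
class KpiLogSpec extends Specification {

    void "save log"() {
        when:
        // use kpiLog, not log, to avoid clashing with the injected logger
        def kpiLog = new KpiLog(scenarioId: 1, kpiId: 2, deltaKpi: 5)
        kpiLog.save(flush: true)

        then:
        KpiLog.list() != null
    }
}

Run it with grails test-app unit: KpiLog.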
A simple log configuration can be:
log4j = {
    appenders {
        console name: 'stdout', layout: pattern(conversionPattern: '%d [%t] %-5p %c - %m%n')
    }
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'
}
Finally, it is working. The cause was the msutil jar in the lib folder; a friend told me it has a compatibility issue. I replaced the jar with the one he sent, and things started working.
Thanks a lot saw303 for your patience with my questions.

Gradle: How do I stop Jetty if JMeter test failed

How do I stop Jetty if a JMeter test fails?
My Gradle script:
apply plugin: 'jetty'
apply plugin: 'jmeter'

jmeterRun {
    doFirst() {
        jettyRunWar.httpPort = 8080 // Port for test
        println "Starting Jetty on port: " + jettyRunWar.httpPort
        jettyRunWar.daemon = true
        jettyRunWar.execute()
    }
    doLast() {
        println "Stopping Jetty"
        jettyStopWar.stopPort = 8091 // Port for stop signal
        jettyStopWar.stopKey = 'stopKey'
        jettyStopWar.execute()
    }
    jmeterTestFiles = [
        file("src/test/jmeter/Tests.jmx")
    ]
}
You can use the method finalizedBy to ensure that Jetty is stopped no matter whether JMeter runs successfully or fails.
jmeterRun {
    dependsOn jettyRunWar
    finalizedBy jettyStopWar
}
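Putting this together with the question's build script, the whole setup might look roughly like the sketch below (configuring the Jetty tasks up front instead of calling execute() inside doFirst/doLast, which is not a supported way to run tasks):

apply plugin: 'jetty'
apply plugin: 'jmeter'

jettyRunWar {
    httpPort = 8080     // port for the test
    daemon = true       // return control to the build once Jetty is up
    stopPort = 8091     // Jetty listens here for the stop signal
    stopKey = 'stopKey'
}

jettyStopWar {
    stopPort = 8091
    stopKey = 'stopKey'
}

jmeterRun {
    dependsOn jettyRunWar     // start Jetty before the JMeter run
    finalizedBy jettyStopWar  // stop Jetty even if the JMeter run fails
    jmeterTestFiles = [file("src/test/jmeter/Tests.jmx")]
}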
Try the settings below.
In doFirst():
jettyRunWar.stopPort = 8090
jettyRunWar.stopKey = 'stopKey'
In doLast():
jettyStop.stopPort = 8090
jettyStop.stopKey = 'stopKey'
Not sure if it's a bug related to this link, or whether you just need to specify a stopPort for Jetty to listen on.
I was having problems stopping Jetty after running the jettyRunWar task in IntelliJ, but having those four settings in my build.gradle allowed me to stop Jetty by running the jettyStop task.