For Java I know it is possible to merge test coverage results at build level by specifying the same path for the JaCoCo reports (see SonarQube: Multiple unit test and code coverage result files), and those merged results can then be sent to SonarQube.
But is it possible to do this on the SonarQube side?
I mean: build and test the software on different build servers or in different jobs, and combine the coverage results on the SonarQube side (perhaps by marking the software version or some kind of given label)?
For me it would be useful to combine integration and unit tests.
You can combine the results of multiple jobs. You can create two coverage folders, e.g.
- coverage-unit
- coverage-integration
and use the resulting lcov files, e.g.
sonar.javascript.lcov.reportPaths=coverage-unit/lcov.info,coverage-integration/lcov.info
Currently it is not possible to "amend" coverage to an existing analysis. You have to orchestrate your build pipeline so that all kinds of coverage reports are produced before you actually start the SonarQube analysis.
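For illustration only, a hedged Python sketch of that kind of orchestration could collect the reports published by the individual jobs and then run a single analysis; the artifact paths, folder names and the assumption that sonar-scanner is on the PATH are mine, not part of the answer above:

import shutil
import subprocess
from pathlib import Path

# Hypothetical artifact locations: each job published its own lcov.info.
JOB_REPORTS = {
    "coverage-unit": Path("artifacts/unit-job/lcov.info"),
    "coverage-integration": Path("artifacts/integration-job/lcov.info"),
}

report_paths = []
for folder, source in JOB_REPORTS.items():
    target_dir = Path(folder)
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / "lcov.info"
    shutil.copy(source, target)   # collect the report produced by the other job
    report_paths.append(str(target))

# One analysis run that sees every report at once.
subprocess.run(
    ["sonar-scanner",
     "-Dsonar.javascript.lcov.reportPaths=" + ",".join(report_paths)],
    check=True,
)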
I am having a hard time running code coverage, since most of the tools (including the Visual Studio one) require unit tests.
I don't understand why I need to create unit tests. Why can't I simply run code coverage on my own console .exe application?
Just press F5 and get the report, without putting effort into creating unit tests or whatever.
Thanks
In general, with a good test coverage tool, coverage data is collected for whatever causes the code to execute. You should be able to exercise the code by hand and get coverage data, and/or execute the code via a test case and get coverage data.
There's a question of how you trace the coverage information back to a test. Most coverage tools just collect coverage data; some other mechanism is needed to associate that coverage data with a set of tests or a specific test. [Note that none of this discussion cares whether your tests pass. It's up to you whether you want to track coverage data for failed tests or not; this information is often useful when trying to find out why a test failed.]
You can associate coverage data with tests by convention; Semantic Designs test coverage tools (from my company) do this. After setting up the coverage collection, you run any set of tests, by any means you desire. Each program execution produces a test coverage vector with a date stamp. You can combine the coverage vectors for such a set into a single set (the UI helps you do this, or you can do it in a batch script by invoking the UI as a command-line tool), and then you associate the set of tests you ran with the combined coverage data. Some people associate individual tests with the coverage data collected by the individual test execution. (Doing this lets you later discover whether one test covers everything another test covers, implying you don't need the second one.)
You can be more organized; you can configure your automated tests to each exercise the code and automatically store the generated vector away in a place unique to that test. This is just a matter of adding a bit of scripting to each automated test. Some test coverage tools come with unit test mechanisms that have a way to do this. (Our tools don't insist on a specific test framework, so they work with any framework.)
I've just started playing with setting thresholds when I run my coverage, trying to force our team to stick to dedicated threshold standards.
My question is this: is there any need for separate test and coverage steps? To me it looks like they are doing exactly the same thing. I was thinking of merging those two steps into one tests-coverage step; does that make sense?
One reason for running tests and coverage separately is that measuring coverage requires changing the program to support collecting coverage information.
In Java, both JaCoCo and Cobertura modify the bytecode of the class files to add instructions that record coverage. In C++, to use GCov to measure coverage you compile the binaries with different flags from those used to create the release binaries.
Therefore, it makes sense to run the tests against the release artifacts to gain confidence that the release artifacts are behaving correctly. Then to measure coverage in a separate run against the instrumented artifacts.
It is, of course, possible to assume that the coverage-enabled artifacts are functionally equivalent to the release artifacts, in which case running the tests twice is not required. This comes down to your (and your company's) attitude to risk: you can decide to run the tests twice (with and without coverage) or once with coverage enabled.
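As a rough illustration of that choice, a two-phase pipeline under the "run the tests twice" policy could be scripted like the Python sketch below; the make targets and test binaries are placeholders for whatever your build actually produces, not a prescribed layout:

import subprocess

def run(cmd):
    # Run one pipeline step and stop the pipeline if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder build/test commands -- substitute your real ones.
run(["make", "release"])             # optimised binaries, no instrumentation
run(["./build/release/run_tests"])   # confidence that the release artifact behaves correctly

run(["make", "coverage"])            # rebuilt with e.g. --coverage, optimisations disabled
run(["./build/coverage/run_tests"])  # same tests again, this time producing coverage data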
I have started using the Google Test unit testing tools, which I am building into a CI pipeline. Is there a code coverage tool that runs in the shell and would allow me to set thresholds and add it as a job in the pipeline?
For reference I come from a NodeJS background and use a pipeline as follows:
linter (eslint)
unit tests (jasmine)
code coverage (istanbul coverage && istanbul check-coverage)
The bit I'm struggling with is the third step. In NodeJS I can set acceptable thresholds, and the job fails if they are not met.
I was hoping to replicate this for my C++ code. Is this even possible?
Code coverage is not linked to the test framework you use.
With C++ on Linux, you have to compile your software with special flags to enable code coverage, e.g. with g++ you have to pass the --coverage flag (disabling all optimisations is also recommended).
When you then run the test programs, you will get a lot of files with the coverage data in them. These can then be collected and evaluated by e.g. lcov.
lcov can create HTML pages with the results, but it also prints the totals of the coverage analysis to stdout. So you would have to write a script that runs lcov, filters the output, and reports success or failure depending on the percentage measured.
Btw, you can set limits for lcov to define when the coverage is sufficient or not, but this is only used for the background color in the HTML output.
For each of these topics you'll find multiple entries here on Stack Overflow describing how these tasks can be accomplished.
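For example, a minimal threshold check along those lines might look like the Python sketch below. It assumes a tracefile has already been captured with lcov and that the summary contains a "lines...: NN.N%" line (the exact output format can vary between lcov versions); a non-zero exit code then fails the CI job:

import re
import subprocess
import sys

THRESHOLD = 80.0             # minimum acceptable line coverage, in percent
TRACEFILE = "coverage.info"  # captured beforehand, e.g. lcov --capture --directory . --output-file coverage.info

# lcov --summary prints something like "  lines......: 85.0% (850 of 1000 lines)";
# the exact wording differs between lcov versions, so treat the regex as an assumption.
result = subprocess.run(
    ["lcov", "--summary", TRACEFILE],
    capture_output=True, text=True, check=True,
)
output = result.stdout + result.stderr

match = re.search(r"lines\.*:\s*([\d.]+)%", output)
if not match:
    print("Could not find the line coverage figure in the lcov summary")
    sys.exit(2)

coverage = float(match.group(1))
print(f"Line coverage: {coverage:.1f}% (threshold {THRESHOLD:.1f}%)")
sys.exit(0 if coverage >= THRESHOLD else 1)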
We use VSTS with the newest SonarQube tasks and SonarQube 5.6.1.
In SonarQube we see all the unit test coverage results except for one item: the number of unit tests. What do we need to configure to have the number of unit tests also shown in SonarQube?
Per the SonarC# documentation, you need to import the Unit Test Execution Results using the applicable property (for example sonar.cs.vstest.reportsPaths). The trick is to set the appropriate value, which is not always straightforward in automated environments (e.g. VSTS).
Pending planned improvements with SONARMSBRU-231, you may want to try the workaround mentioned in that ticket:
/d:sonar.cs.vstest.reportsPaths=..\**\TestResults\**\*.trx
(under Advanced, Additional Settings, in the Prepare the SonarQube analysis build step)
I am looking for a solution where, for every CI build on Jenkins, I can find out which commit broke how many and which unit test cases.
So far I have tried the Build Failure Analyzer,
but it is not sufficient to get accurate results.
I am also trying the Jacoco-Comparison-Tool, but there is no Jenkins integration for it; I am still trying to find a way to use it.
Are there any other tools or anything else that can help me get the unit test error/failure reports?
If your project has tests (unit tests or non-unit tests), then using the JMeter plugin in Jenkins you can see, per build, which tests passed/failed. https://wiki.jenkins-ci.org/download/attachments/36601991/jmeterV3.jpg?version=1&modificationDate=1260240983000
In Jenkins there's a Test Results Analyzer plugin which also provides a side-by-side comparison (at class/package level) for any number of builds with nice charts, but it's basically top-level info (i.e. it just shows whether a given test passed/failed in green/red).
There are other plugins (e.g. the xUnit plugin) that you can try. Also, if you are using SonarQube (analyzing and publishing your tests/results), you can see what happened between two builds (whether the builds failed/passed and to what percentage).
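If none of the plugins gives enough detail, another option is a small script step that diffs the JUnit-style XML result files of two builds and reports the tests that are newly broken. A rough Python sketch, assuming the XML reports of both builds have been archived (the file names here are placeholders):

import sys
import xml.etree.ElementTree as ET

def failed_tests(report_path):
    # Return the set of "<classname>.<name>" ids that failed or errored in a JUnit-style XML report.
    failed = set()
    for case in ET.parse(report_path).iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            failed.add(f"{case.get('classname')}.{case.get('name')}")
    return failed

# Usage: python compare_results.py previous-build.xml current-build.xml
previous_report, current_report = sys.argv[1], sys.argv[2]
newly_broken = failed_tests(current_report) - failed_tests(previous_report)

for test in sorted(newly_broken):
    print("Newly broken:", test)
sys.exit(1 if newly_broken else 0)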