Branch coverage with JaCoCo, Emma from IntelliJ - unit-testing

I am trying to measure branch coverage of unit tests for a large Grails application. Using JaCoCo, Emma, and IDEA to collect the metrics from inside IntelliJ, I get the following:
JaCoCo (no metrics are shown even for line coverage)
Emma (produces method and line coverage)
IDEA (produces class, method and line coverage)
I am mostly interested in JaCoCo as it should give me Branch Coverage by default. Could someone point me to some tips on how to troubleshoot this?

Actually, the IntelliJ code coverage tool does support branch coverage, though it does not show the results in the summary. Check this article to see how it can be configured and how you can check your branch coverage: https://confluence.jetbrains.com/display/IDEADEV/IDEA+Coverage+Runner
The key is to use Tracing instead of Sampling.

Related

Pytest coverage with line coverage and minimum limits like Karma/Istanbul

In the Istanbul coverage module for Karma you can set thresholds for different kinds of coverage. If some coverage doesn't meet its minimum, Istanbul throws an error. This is very useful when building the project with Jenkins and you have to enforce such limits. Is it possible to get similar functionality with pytest-cov or any other module?
pytest-cov generates only statement coverage. Is it possible to get line/code coverage as well?
Coverage.py (which is the engine for pytest-cov) has thresholds for total coverage, but not separate thresholds for different measurements. Look at the --fail-under option.
Coverage.py can measure statement coverage and branch coverage. You mention "line" coverage and "code" coverage: I don't know how those differ from statement coverage.
You can find the option you need as follows:
pytest --help
--cov-fail-under=MIN Fail if the total coverage is less than MIN.
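For example, to enable branch coverage and enforce a limit in one invocation (a minimal sketch; the package name mypkg and the 90% limit are placeholders):

# measure branch coverage for mypkg and fail if total coverage < 90%
pytest --cov=mypkg --cov-branch --cov-fail-under=90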

How can I get C++ code coverage from a Google Test suite in the terminal?

I have started using the Google Test unit testing tools which I am building into a CI pipeline. Is there a code coverage tool that runs in the shell and would allow me to set thresholds and add as a job into the pipeline?
For reference I come from a NodeJS background and use a pipeline as follows:
linter (eslint)
unit tests (jasmine)
code coverage (istanbul coverage && istanbul check-coverage)
The bit I'm struggling with is the third step. In NodeJS I can set the acceptable thresholds, and the job fails if these are not met.
I was hoping to replicate this for my C++ code. Is this even possible?
Code coverage is not linked to the test framework you use.
With C++ on Linux, you have to compile your software with special flags to enable code coverage; e.g. with g++ you have to pass the --coverage flag (disabling all optimisations is also recommended).
When you then run the test programs, you will get a lot of files with the coverage data in them. These can then be collected and evaluated by e.g. lcov.
lcov can create HTML pages with the result, but it also prints the totals of the coverage analysis to stdout. So you would have to build a script that runs lcov, filters the output, and reports success or failure depending on the percentage measured, as in the sketch below.
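A minimal sketch of such a gate script (assumptions: g++ and lcov are installed, test_main.cpp stands in for your real Google Test build, and the awk filter relies on lcov's usual "lines......: NN.N%" summary format):

# build with coverage instrumentation and no optimisation
g++ --coverage -O0 -o test_runner test_main.cpp
./test_runner        # running the tests writes the .gcda data files
# collect the data into a tracefile
lcov --capture --directory . --output-file coverage.info
# pull the line-coverage percentage out of the summary and enforce a limit
total=$(lcov --summary coverage.info 2>&1 | awk '/lines/ {gsub("%","",$2); print $2}')
if [ "$(echo "$total < 90" | bc -l)" -eq 1 ]; then
    echo "Line coverage ${total}% is below the 90% limit" >&2
    exit 1
fi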
Btw, you can set limits for lcov to define when the coverage is sufficient or not, but this is only used for the background color in the HTML output.
On each of these topics you'll find multiple entries here on Stack Overflow explaining how these tasks can be accomplished.

Code coverage results do not match between local Visual Studio and the TFS build server

Recently I created unit test methods for my project solution. When I run code analysis locally to measure code coverage, it shows 82% code coverage.
But when I checked my code in to TFS, the code analysis report on the build server shows code coverage as 58%.
Can someone let me know if they have encountered this issue, or suggest a possible solution?
In the TFS build definition, did you specify a .runsettings file or Test Filter criteria for code coverage analysis, or just enable the "CodeCoverageEnabled" setting?
If you set a filter or a .runsettings file, that is likely why the code coverage results differ. Please see the articles below for details.
Configure unit tests by using a .runsettings file
Customizing Code Coverage Analysis
So, if you want to make a comparison, both runs should use the same conditions. The filter excludes test methods that do not meet the criteria, so not all tests are run and the coverage result will not match what developers see locally.
You could delete the filter criteria and test again.
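For illustration, a fragment like this in a .runsettings file silently shrinks what the server measures (a sketch following the structure in the "Customizing Code Coverage Analysis" article; the module pattern is a hypothetical example):

<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <!-- hypothetical filter: any module matching this pattern
                     is left out of the server-side coverage numbers -->
                <ModulePath>.*\\SomeLegacyProject\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>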
For other possible causes of the difference, see: Troubleshooting Code Coverage

Looking for a way to identify, with each commit, which unit test cases are broken

I am looking for a solution where, with every CI build on Jenkins, I can find out with which commit how many, and which, unit test cases were broken.
So far I have tried the Build Failure Analyzer plugin, but it is not sufficient to get accurate results.
I am also trying the Jacoco-Comparison-Tool, but there is no Jenkins integration for it; I am still trying to find a way to use it.
Are there any other tools, or anything else, that can help me get the UT error/failure reports?
If your project has tests (unit tests or non-unit tests), then using the JMeter Plugin in Jenkins you can see, per build, which tests passed/failed. https://wiki.jenkins-ci.org/download/attachments/36601991/jmeterV3.jpg?version=1&modificationDate=1260240983000
In Jenkins there's a Test Results Analyzer plugin which also provides a side-by-side comparison (at class/package level) across any number of builds, with nice charts, but it's basically top-level info (i.e. it just shows whether a given test passed/failed, in green/red).
There are other plugins (e.g. the xUnit plugin) that you can try. Also, if you are using SonarQube (analyzing and publishing your tests/results), you can see what happened between two builds (whether the builds failed/passed and to what %).

TeamCity: How do you report low unit test coverage?

We use TeamCity 7 (an upgrade to 8 is possible) for continuous integration, and we have set ourselves a unit test coverage target of 90%. I know how to fail the build if the coverage is lower, but I'd rather not do so, as a missing test would slow down all development.
On the other hand, I'd like clear visibility on the build overview page that the coverage is low; the only option I see is a service message like this one:
##teamcity[buildStatus status='SUCCESS' text='WARN: Test coverage only 89% {build.status.text}']
But that won't send any notification. Do you have any other suggestions, please?
Set the coverage HTML report as a build artifact and link to it from TeamCity, i.e. set up a new report tab pointing at the artifact.
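If you also want the warning from the question to appear automatically, one option is to emit that service message from a build step only when coverage drops below the target. A minimal shell sketch, assuming a hypothetical COVERAGE variable already extracted from your coverage report as an integer percentage:

# warn on the build overview without failing the build
if [ "$COVERAGE" -lt 90 ]; then
    echo "##teamcity[buildStatus status='SUCCESS' text='WARN: Test coverage only ${COVERAGE}% {build.status.text}']"
fi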