Code coverage results do not match in local visual studio and TFS build server - visual-studio-2017

Recently I created unit test methods for my project solution. When I run code coverage analysis locally, it reports 82% code coverage.
But when I check my code in to TFS, the code analysis report on the build server shows code coverage of 58%.
Has anyone else encountered this issue, and is there a possible solution?

In the TFS build definition, did you specify a .runsettings file or Test Filter criteria for code coverage analysis, or did you just enable the "CodeCoverageEnabled" setting?
If you set a filter or a .runsettings file, that is likely why the code coverage results differ. Please see the articles below for details.
Configure unit tests by using a .runsettings file
Customizing Code Coverage Analysis
So, if you want to make a fair comparison, both runs must use the same conditions. A filter excludes the test methods that do not meet its criteria, so not all tests are run on the build server, and the code coverage result will not match what developers see locally.
You could delete the filter criteria and test again.
For other possible causes of the difference, see: Troubleshooting Code Coverage
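For reference, a minimal .runsettings fragment that constrains what the Visual Studio/TFS coverage collector counts; the module-path pattern below is a placeholder, not from the question, but a rule like this on the server and not on the desktop would produce exactly this kind of mismatch:

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <!-- Excluding (or including) assemblies here changes the reported percentage -->
                <ModulePath>.*Tests\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```

Comparing the .runsettings used by the build definition against whatever the developers run locally is usually the fastest way to find the discrepancy.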

Related

How do I get SonarQube to analyse Code Smells in unit tests but not count them for coverage reports?

I have a C++ project being analysed with the commercial SonarQube plugin.
My project has, in my mind, an artificially high code coverage percentage, because both the "production" source code lines and the unit test code lines are counted. It is quite difficult to write unit test code that is not run as part of the unit testing, so those lines give an immediate boost to the coverage reports.
Is it possible to have the Unit Test code still analysed for Code Smells but not have it count towards the test coverage metric?
I have tried setting the sonar.tests=./Tests parameter (where ./Tests is the directory with my test code). This seems to exclude the test code from all analysis, leaving smells undetected. I would rather check that the test code is of good quality than hope it is obeying the rules applied to the project.
I tried adding sonar.test.inclusions=./Tests/* in combination with the above. However, either I got the file path syntax wrong, or setting this variable causes a complete omission of the test code, so that it no longer appears under the 'Code' tab at all as well as being excluded from analysis.
The documentation on narrowing the focus of what is analysed is not at all clear on what the expected behaviour is, at least to me. Any help would be greatly appreciated, as going through every permutation will be quite tedious.
Perhaps I should just accept that with ~300 lines of "production" code and 900 lines of stubs, mocks and unit tests, a reported 75% test coverage could mean running 0 lines of "production" code. I checked, and currently my very simple application is at about that ratio of test code to "production" code. I'd expect the ratio to move towards 50:50 over time, but it might not.
One solution I found was to have two separate SonarQube projects for a single repository. The first you set up in the normal way, with the test code excluded via sonar.tests=./Tests. The second you make a -test project where you exclude all your production code.
This adds some admin and setup, but it guarantees that coverage for the normal project is a percentage of only the production code, and that SonarQube analysis is performed on all your test code (which can also have coverage tracked and would be expected to be very high).
I struggle to remember where I found this suggestion a long time ago. Possibly somewhere in the SonarQube Community Forum, which is worth a look if you are stuck on something.
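A sketch of that two-project setup, assuming the scanner's standard sonar-project.properties files; the project keys and directory names are illustrative:

```properties
# Project 1 (sonar-project.properties in the repo root):
# production code only counts towards coverage
sonar.projectKey=myproject
sonar.sources=./src
sonar.tests=./Tests

# Project 2 (a second sonar-project.properties, e.g. in a separate
# scanner invocation): test code analysed as ordinary sources,
# so smells in the test code are still reported
sonar.projectKey=myproject-tests
sonar.sources=./Tests
```

Each properties file is fed to its own scanner run, producing two dashboards: one whose coverage figure reflects only production code, and one that applies the quality rules to the test code.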

Using Test filter criteria to run only certain xunit tests

I'm trying to run only a certain subset of xUnit tests using the TFS Test Filter criteria:
The problem is that the filter is not doing anything; it still runs every test regardless of its name. What might I be missing here? Is there something else I need to do because I'm using xUnit? Or is there another way to exclude tests in the unit test step?
Test filter criteria: Filters tests from within the test assembly files. For example, “Owner=james&Priority=1”. This option works the
same way as the console option /TestCaseFilter for vstest.console.exe
For more information, see
https://msdn.microsoft.com/en-us/library/jj155796.aspx
Please double-check the FullyQualifiedName property in your assembly files. I'm not sure whether /TestCaseFilter is also supported for xUnit. You could try running the tests directly from the command line with vstest.console.exe; if it doesn't work there, it will not work in the TFS build task either.
Here is a related link about test filters for your reference: VSTS/TFS VISUAL STUDIO TEST TASK – FILTER CRITERIA
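To try the suggestion above locally, the same filter expression can be passed straight to vstest.console.exe; the assembly path and test names below are placeholders, and whether trait-based filters apply depends on the xUnit runner version in use:

```shell
rem Run only tests whose fully qualified name contains "Checkout"
vstest.console.exe MyProject.Tests.dll /TestCaseFilter:"FullyQualifiedName~Checkout"

rem Filter on an xUnit [Trait("Category", "Smoke")] attribute
vstest.console.exe MyProject.Tests.dll /TestCaseFilter:"Category=Smoke"
```

If these commands still run every test, the problem is in the filter expression or the test adapter, not in the TFS build task.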

Branch coverage with JaCoCo, Emma from IntelliJ

I am trying to measure the branch coverage of unit tests for a large Grails application. Using JaCoCo, Emma and IDEA to collect the metrics from inside IntelliJ, I get the following:
JaCoCo (no metrics are shown even for line coverage)
Emma (produces method and line coverage)
IDEA (produces class, method and line coverage)
I am mostly interested in JaCoCo, as it should give me branch coverage by default. Could someone point me to some tips on how to troubleshoot this?
Actually, the IntelliJ code coverage tool supports branch coverage, though it does not show the results in the summary. Check this article to see how it can be configured and how you can check your branch coverage: https://confluence.jetbrains.com/display/IDEADEV/IDEA+Coverage+Runner
The key is to use Tracing instead of Sampling.

how to check code coverage of specific controller in mvc?

I want to know the steps for analyzing code coverage of any specific controller.
I am working on ASP.NET MVC4 with Visual Studio 2012 and TFS setup.
I know that I can analyze code coverage for all of the controllers from the Test Explorer window's "Analyze Code Coverage" option.
But I want to check the code coverage of just one controller.
First, group the tests by Class Name in the Test Explorer window. Then find the tests for the specific controller whose code coverage you need, right-click them, and select "Analyze Code Coverage For Selected Tests". This produces the code coverage for only that controller's tests. Note that all the other controllers/types still appear instrumented in the Code Coverage Results window, but they have no coverage result; they just show Covered (Blocks) = 0.
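The same selection can be made from the command line by filtering on the test class name; the assembly and class names below are placeholders:

```shell
rem Run only one controller's test class and collect coverage
vstest.console.exe MyMvcApp.Tests.dll /EnableCodeCoverage ^
  /TestCaseFilter:"ClassName=MyMvcApp.Tests.Controllers.HomeControllerTests"
```

This assumes the tests for each controller live in their own test class, which is also what makes the group-by-Class-Name approach in Test Explorer work.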

OpenCover results not matching TFS numbers

We have a VS 2008 project (ASP.NET) under TFS. We have written our unit tests using a mocking framework (NUnit?). Developers can execute the tests on their machines and view the code coverage.
Now we have upgraded our solution to VS 2012 Professional. Being the Professional edition, it does not support running code coverage, so I have ventured into trying OpenCover.
The problem is that on a TFS build, code analysis shows, say, 24% code coverage, but when I run OpenCover locally on a developer machine, it shows a completely different figure. We need to aim for what TFS reports, as that is what our organization's automated ALM compliance engines monitor, and developers need to make sure the TFS code coverage % does not fall below X.
My OpenCover syntax is:
OpenCover.Console.exe -register:user -target:"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\MSTest.exe" -targetargs:"/noisolation /testcontainer:\"C:\code\APRRel\UnitTest\bin\Debug\unittest.dll\" /resultsfile:C:\Reports\MSTest\APRRel.trx" -filter:"+[*]*" -mergebyhash -output:C:\Reports\MSTest\projectCoverageReport.xml
Here are my outputs from TFS and OpenCover:
How can I make OpenCover report statistics similar to those of TFS?
You are trying, as they say, to compare apples with pears. The two tools are different and they have differing ways of instrumenting the code. Also, the number shown in ReportGenerator is based on the number of covered lines. OpenCover reports a different number based on the number of sequence points (this is not a 1:1 relationship with lines; it is more of an n:m).
If you open the actual opencover output you will see a summary line near the top like this
<Summary numSequencePoints="895" visitedSequencePoints="895" numBranchPoints="537" visitedBranchPoints="455" sequenceCoverage="100" branchCoverage="84.73" maxCyclomaticComplexity="8" minCyclomaticComplexity="0" />
The sequence coverage is the one you want to look at, as this metric is probably closest to what VS coverage refers to as block-based, but again the two tools may differ due to differing instrumentation practices (apples vs pears).
Next, consider your filter: +[*]* will include all assemblies that have a PDB file. Try excluding your test assemblies with the filter +[*]* -[*.Tests]* (this assumes your test assembly names end with .Tests).
However, IMO, if you want your devs to see the same coverage on the desktop as on the build system, they really should run the same tools in both locations and in the same configuration. Debug and release coverage can differ because the compiler creates different IL, which affects the instrumentation and hence the numbers, but they are usually within 0-2% depending on your codebase.
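Putting the filter suggestion into the question's command line, it would look like the following; the .Tests suffix is an assumption about the test assembly naming:

```shell
OpenCover.Console.exe -register:user ^
  -target:"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\MSTest.exe" ^
  -targetargs:"/noisolation /testcontainer:\"C:\code\APRRel\UnitTest\bin\Debug\unittest.dll\" /resultsfile:C:\Reports\MSTest\APRRel.trx" ^
  -filter:"+[*]* -[*.Tests]*" ^
  -mergebyhash ^
  -output:C:\Reports\MSTest\projectCoverageReport.xml
```

With the test assemblies excluded, the sequenceCoverage figure in the output summary is the number to compare against the block-based figure TFS reports, keeping in mind the instrumentation differences described above.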