dotCover is reporting no coverage if corresponding tests are failing - dotcover

dotCover reports that a particular function/method in a file doesn't have coverage, even though there is actually a test for that function/method.
I noticed that this happens when the corresponding test is failing, and dotCover then flags the method as not covered. If the test passes, I don't see this issue.
I am wondering if this is intentional, or if there are any settings we can change. I don't want dotCover to flag lines of code as missing coverage just because of a failing test.
Any help is much appreciated.

dotCover displays coverage results and treats code as covered regardless of test failures. What matters is whether the code within a test is actually invoked before a failure occurs. For example, if the corresponding 'SetUp' method fails, the test body never runs, so coverage data for that test will be missing.
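As a rough illustration (NUnit syntax; the Calculator class and test names here are made up, not from the question), the difference looks like this:

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    private Calculator _calculator;   // hypothetical class under test

    [SetUp]
    public void SetUp()
    {
        // If this throws, the test body below never runs, so Calculator.Add is
        // never invoked and it shows up as uncovered.
        _calculator = new Calculator();
    }

    [Test]
    public void Add_ReturnsSum()
    {
        var result = _calculator.Add(2, 3);      // Add() executes here, so it counts as covered...
        Assert.That(result, Is.EqualTo(6));      // ...even if this assertion then fails.
    }
}

In the second case the test fails, but Calculator.Add was still executed before the failure, so it is reported as covered.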

Related

How do I get SonarQube to analyse Code Smells in unit tests but not count them for coverage reports?

I have a C++ project being analysed with the commercial SonarQube plugin.
My project has, in my mind, an artificially high code coverage percentage reported, because both the "production" source code lines and the unit test code lines are counted. It is quite difficult to write many unit test lines that are not run as part of the unit testing, so they give an immediate boost to the coverage reports.
Is it possible to have the Unit Test code still analysed for Code Smells but not have it count towards the test coverage metric?
I have tried setting the sonar.tests=./Tests parameter (where ./Tests is the directory with my test code). This seems to exclude the test code from all analysis, leaving smells undetected. I would rather check that the test code is of good quality than hope it is obeying the rules applied to the project.
I tried adding sonar.test.inclusions=./Tests/* in combination with the above. However, either I got the file path syntax wrong, or setting this property causes a complete omission of the test code, so that it no longer appears under the 'Code' tab at all as well as being excluded from coverage.
The documentation on Narrowing the Focus of what is analysed is not all that clear on what the expected behaviour is, at least to me. Any help would be greatly appreciated, as going through every permutation would be quite confusing.
Perhaps I should just accept the idea that with ~300 lines of "production" code and 900 lines of stubs, mocks and unit tests, a value of 75% test coverage could mean running 0 lines of "production" code. I checked, and currently my very simple application is at about that ratio of test code to "production" code. I'd expect the ratio to move more towards 50:50 over time, but it might not.
One solution I found was to have two separate SonarQube projects for a single repository. The first you set up in the normal way, with the test code excluded via sonar.tests=./Tests. The second you make a "-test" project in which you exclude all your production code.
This adds some admin and setup but guarantees that coverage for the normal project is a percentage of only the production code and that you have SonarQube Analysis performed on all your test code (which can also have coverage tracked and would be expected to be very high).
I struggle to remember where I found this suggestion a long time ago. Possibly somewhere in the SonarQube Community Forum, which is worth a look if you are stuck on something.
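As a rough sketch of what the two scanner configurations might look like (the property names are standard SonarQube scanner settings, but the project keys and directory layout are placeholders):

# sonar-project.properties for the main project: production code is analysed,
# test code is registered as tests so it stays out of the coverage denominator
sonar.projectKey=my-project
sonar.sources=./Src
sonar.tests=./Tests

# sonar-project.properties for the companion "-test" project (a separate file):
# only the test code is analysed for smells; production code is simply not
# listed as a source here
sonar.projectKey=my-project-tests
sonar.sources=./Tests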

Code coverage results do not match in local visual studio and TFS build server

Recently I created unit test methods for my project solution. When I run code analysis locally to find out code coverage, it shows 82% code coverage.
But when I check my code in to TFS, the code analysis report on the build server shows code coverage as 58%.
Can someone please let me know if they have encountered this issue, or suggest a possible solution?
In the TFS build definition, did you specify a .runsetting file or Test Filter criteria for code coverage analysis or just choose the "CodeCoverageEnabled" setting?
If you set a filter or a .runsettings file, that is most likely why the code coverage results differ. Please see the articles below for details.
Configure unit tests by using a .runsettings file
Customizing Code Coverage Analysis
So, if you want to do a comparison, both runs should be under the same conditions. The filter excludes test methods that do not meet the criteria, so not all tests are run and the code coverage result is not the same as what developers see locally.
You could delete the filter criteria and test again.
For other reasons that can cause the difference, see: Troubleshooting Code Coverage
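To make the comparison concrete, a .runsettings file along these lines (a trimmed-down sketch following the format in the Microsoft documentation; the module pattern is only an example) narrows what the coverage collector counts, so a build that applies it will report a different percentage than a local run that does not:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <!-- Assemblies matching this pattern are left out of the coverage calculation -->
                <ModulePath>.*Tests\.dll$</ModulePath>
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>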

TFS2017 (RTM): Unit test summary in build doesn't show actual test results

After a build that runs C# unit tests, the number of tests is shown, but the results of the individual tests are not provided. See:
However, the test results are published (see the log) and are available when I search for them in the Test tab...
Finally, here is the test task from the build definition:
Build Definition
Any Idea, what might be wrong?
What you got is correct, because you have selected "Failed" (the English label) in the Outcome filter and you have no failed tests at all.
You just need to change the Outcome filter to "All", and all tests will be shown.

MSTest: Is it possible to keep count of ignored tests?

I would like to generate statistics from the test run, and would like to keep track of these 4 numbers:
failed / passed / inconclusive / ignored tests.
My question is: is it possible to get the number of skipped/ignored tests (that is, test methods attributed with [Ignore])?
I'm not aware of any working solutions for this yet.
Please see this issue in the Microsoft tracker: "mstest.exe report file (trx) does not list ignored tests". Its status is "Closed as Deferred".
However, the current design seems to make more sense to me. Statistics for test runs should only include the tests that are supposed to be executed. Tests marked with [Ignore] should not be considered part of the test run. If people have excluded some tests from test runs intentionally, why would they want to see them in the test run results?
That said, I'd personally like to see this feature. More statistics won't hurt, after all.
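For reference, this is the kind of test being discussed (a minimal MSTest sketch; the class and method names are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ReportingTests
{
    [TestMethod]
    public void Passes()
    {
        Assert.IsTrue(true);
    }

    // Marked with [Ignore]: MSTest skips this method, and (per the tracker
    // issue above) the .trx report does not count it among the executed tests.
    [Ignore]
    [TestMethod]
    public void SkippedForNow()
    {
        Assert.Fail("never runs");
    }
}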

How to print test summary using boost unit test

Is there a way to print a summary of the tests run with Boost unit test? In particular, can a listing of the failed tests be produced?
I'm having a hard time locating failing tests in the output (especially when the tests have their own output). I already set BOOST_TEST_LOG_LEVEL in order to show enter/exit, but that isn't enough to locate the failing tests.
Use the option: --report_level=detailed
It will report all your failing test cases and suites.
I like the --report_level=short option:
Test module "Main" has passed with:
796 test cases out of 796 passed
75790 assertions out of 75790 passed
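As a minimal sketch of how this fits together (the module name, test names, and binary name here are made up):

#define BOOST_TEST_MODULE Main
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE(addition_works)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);   // passes
}

BOOST_AUTO_TEST_CASE(addition_is_broken)
{
    BOOST_CHECK_EQUAL(2 + 2, 5);   // fails
}

// Run e.g.:  ./my_tests --report_level=detailed
// The detailed report names the failing test case (addition_is_broken),
// which is the listing asked about above, without having to dig through
// the per-test log output.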