Is there a way to jump to the test code that is already in place? For example, let's say I have a method and some of its code is 'covered' after I run with coverage. How can I tell which unit tests are covering this code without manually checking each test?
#jonrsharpe gave me the resources to answer this question, so thanks!
Check this out; it will give you all the information you need: info on viewing code coverage results.
This feature can be accessed by clicking on the coloured line marker next to your code and then clicking a button called 'Show Tests Covering Line'. The option is only available if you enable 'Tracing' for your code coverage, which can be done when configuring the test task. Info on this bit here: info on configuring run profiles for code coverage.
I have a C++ project being analysed with the commercial SonarQube plugin.
My project has, in my mind, an artificially high code coverage percentage reported as both the "production" source code lines and Unit Test code lines are counted. It is quite difficult to write many unit test code lines that are not run as a part of the unit testing, so they give an immediate boost to the coverage reports.
Is it possible to have the Unit Test code still analysed for Code Smells but not have it count towards the test coverage metric?
I have tried setting the sonar.tests=./Tests parameter (where ./Tests is the directory with my test code). This seems to exclude the test code from all analysis, leaving smells undetected. I would rather check that the test code is of good quality than hope it is obeying the rules applied to the project.
I also tried adding sonar.test.inclusions=./Tests/* in combination with the above. However, either I got the file path syntax wrong, or setting this property causes a complete omission of the test code, so that it no longer appears under the 'Code' tab at all as well as being excluded from coverage.
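For reference, this is roughly the combination described above in sonar-project.properties form (the sonar.projectKey and sonar.sources values are only assumptions about the layout, not the real project's values):

    # sonar-project.properties -- sketch of the attempted settings
    sonar.projectKey=my-cpp-project
    sonar.sources=./Src
    sonar.tests=./Tests
    sonar.test.inclusions=./Tests/*
    # Note: analysis path patterns are matched relative to the project base
    # directory, so a pattern like Tests/**/* may behave differently from
    # ./Tests/*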
The documentation on Narrowing the Focus of what is analysed is not all that clear on what the expected behaviour is, at least to me. Any help would be greatly appreciated, as going through every permutation would be quite confusing.
Perhaps I should just accept the idea that with ~300 lines of "production" code and 900 lines of stubs, mocks and unit tests, a value of 75% test coverage could mean running 0 lines of "production" code. I checked, and currently my very simple application is at about that ratio of test code to "production" code. I'd expect the ratio to move towards 50:50 over time, but it might not.
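To spell out the arithmetic with those numbers: if all 900 test lines execute and none of the 300 production lines do, the combined metric is 900 / (900 + 300) = 75%, even though coverage of the production code alone is 0%.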
One solution I found was to have two separate SonarQube projects for a single repository. The first you set up in the normal way, with the test code excluded via sonar.tests=./Tests. The second you make a "-test" project in which you exclude all your production code.
This adds some admin and setup but guarantees that coverage for the normal project is a percentage of only the production code and that you have SonarQube Analysis performed on all your test code (which can also have coverage tracked and would be expected to be very high).
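As a rough sketch, assuming one scanner-side sonar-project.properties per project (all key values below are illustrative, and limiting sonar.sources to the test directory is just one way to realise the exclusion):

    # sonar-project.properties for the main project:
    # coverage is a percentage of production code only
    sonar.projectKey=myproject
    sonar.sources=./Src
    sonar.tests=./Tests

    # sonar-project.properties for the companion "-test" project:
    # test code analysed as ordinary sources, production code left out
    sonar.projectKey=myproject-test
    sonar.sources=./Tests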
I struggle to remember where I found this suggestion a long time ago. Possibly somewhere in the SonarQube Community Forum, which is worth a look if you are stuck on something.
I'm using IDEA in order to analyze our code coverage.
I can see that some lines were called N times, but I also want to know which test caused a given line to be called.
I see the appropriate button "Show tests covering line", but this button is disabled for all of the lines.
So what is the reason for that behavior, and is it possible to force IDEA to show the tests which called a particular line of the code?
From the docs:
For JUnit tests, you can open the test that covers a line in a separate dialog. To do so, click the Show Tests Covering Line icon in the popup. To be able to use this feature, enable the Tracing mode and Track per test coverage options for the current run/debug configuration in the Code Coverage area. For more information, refer to Set coverage in run configurations.
The Set Coverage in run configurations page details how to do this for your IDE version. Newer versions will have a Coverage tab; older versions will not. The setup instructions for both are detailed on that page.
Recently I created unit test methods for my project solution. When I do code analysis to find the code coverage, it shows 82% code coverage.
But when I checked in my code on TFS, the code analysis report on the build server shows the code coverage as 58%.
Can someone please let me know if they have encountered this issue, or of any possible solution?
In the TFS build definition, did you specify a .runsettings file or Test Filter criteria for code coverage analysis, or just choose the "CodeCoverageEnabled" setting?
If you set the filter or a .runsettings file, that should be the reason why the code coverage results are different. Please see the articles below for details.
Configure unit tests by using a .runsettings file
Customizing Code Coverage Analysis
So, if you want to do a comparison, you should do it under the same conditions. The filter will exclude those test methods which do not meet the criteria, so not all tests are run and the code coverage result is not the same as on the developers' machines.
You could delete the filter criteria and test again.
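For illustration, a module-level exclusion in a .runsettings file looks roughly like the following (the DLL name pattern is just an example, not from your build); anything filtered this way will make the build-server number differ from a local run that does not use the file:

    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="Code Coverage">
            <Configuration>
              <CodeCoverage>
                <ModulePaths>
                  <Exclude>
                    <!-- example pattern: leave test assemblies out of the numbers -->
                    <ModulePath>.*Tests\.dll$</ModulePath>
                  </Exclude>
                </ModulePaths>
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>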
For other reasons that can cause the difference, please see: Troubleshooting Code Coverage
I am trying to measure branch coverage of unit tests for a large Grails application. I am using JaCoCo, Emma and IDEA to collect the metrics from inside IntelliJ, and I am getting the following:
JaCoCo (no metrics are shown even for line coverage)
Emma (produces method and line coverage)
IDEA (produces class, method and line coverage)
I am mostly interested in JaCoCo as it should give me Branch Coverage by default. Could someone point me to some tips on how to troubleshoot this?
Actually, the IntelliJ code coverage tool supports branch coverage, though it does not show the results in the summary. Check this article to see how it can be configured and how you can check your branch coverage: https://confluence.jetbrains.com/display/IDEADEV/IDEA+Coverage+Runner
The key is to use Tracing instead of Sampling.
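As a side note, if you also want branch figures from JaCoCo itself rather than from the IDE runner, a minimal sketch assuming a Gradle-based Grails 3+ build would be:

    // build.gradle -- assumes the standard Gradle 'jacoco' plugin is available
    apply plugin: 'jacoco'

    jacoco {
        toolVersion = '0.8.8'       // any recent JaCoCo release
    }

    jacocoTestReport {
        dependsOn test              // make sure the tests have run first
        reports {
            html.required = true    // Gradle 7+ syntax; older Gradle uses html.enabled
            xml.required = true
        }
    }

Running ./gradlew test jacocoTestReport then produces an HTML report with a branches column, which is a useful cross-check against what the IDE shows.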
I have a huge group of unit tests that I'm running against our software where I work. I'm running them using IntelliJ IDEA. I'm using Spock and Groovy to make these tests, but they are, in turn, using JUnit. So anything that applies to JUnit should apply here as well... (in theory)
In order to figure out what the coverage is, I'm right-clicking on the root of the code I want to run and simply selecting "Test in 'example' with Coverage".
IntelliJ then proceeds to try and run the 270-odd tests we have in this part of the project. The problem is that it isn't running a couple of them, and I can't figure out why for the life of me.
I've googled the issue and turned up nothing substantial. The list of tests in IntelliJ only tells me that these couple of tests didn't start; it doesn't give a reason at all. I tried checking the logs, but all they say about these particular tests is that it's trying to open them, with no comment on whether that succeeded or why they aren't working.
Could someone just point me in a useful direction? I need to expand my test base's coverage, and being able to see what code is actually covered is pretty critical...
Some Clarification:
The tests can be run on their own, and they do compute coverage for themselves if you do so. They only fail to run when I run them in a batch.
I'm pretty sure it's not a display issue, because I get several different indications that the tests were not run. Beyond just the frozen yellow spinner, it also warns me with a red "attention" bubble that the tests did not run.