After importing the jacoco session into eclipse it shows zero coverage - jacoco

I followed these steps to measure code coverage using JaCoCo:
Instrumented my application's WAR files on the fly using the JaCoCo agent.
Started the Tomcat server.
Ran some test cases.
Stopped the Tomcat server.
After stopping Tomcat, the .exec coverage file was generated in the destination folder.
Opened my project workspace in Eclipse.
Imported the .exec file, but it shows 0% coverage for my Maven multi-module project.
What steps do I need to follow to get an accurate coverage report from this .exec file?
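For reference only (the paths below are placeholders, not taken from the question), "instrumenting on the fly" with the agent typically means starting Tomcat with a -javaagent JVM argument, e.g. in bin/setenv.sh:
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/path/to/jacocoagent.jar=destfile=/path/to/jacoco.exec,append=true"
The agent then writes the .exec file to destfile when the JVM shuts down.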

Could it be that the classes deployed into Tomcat were compiled differently, at a different time, or with a different JDK from those in Eclipse?
The EclEmma import docs at http://www.eclemma.org/userdoc/importexport.html say:
Warning: Imported execution data must be based on the exact same class files that are also used within the Eclipse IDE. If the external launch was based on different class files (e.g. created with different compiler) no coverage will be shown.
The execution data stores a hash of each class it was generated for, so if that hash doesn't match the corresponding class in Eclipse, you'll see no coverage.

Make sure that the code base is the same version in Eclipse as what is deployed on the server.
JaCoCo stores a hash of each class file, and if the files differ the hashes no longer match and you get zero coverage.


How to generate a report for JaCoCo when used as a Java agent?

I configured JaCoCo in WebSphere as a Java agent (see http://www.jacoco.org/jacoco/trunk/doc/agent.html).
Restarted the server, ran a series of automated tests against the application (to give it some load), and then stopped the server.
I can see jacoco.exec being generated on the server (configured to go to /tmp/).
Now, how do I generate the HTML report?
Before voting this question down or marking it as a duplicate, here is why I'm posting it: I went through the JaCoCo documentation, such as http://www.jacoco.org/jacoco/trunk/doc/maven.html, and multiple Stack Overflow questions, but I'm still confused.
What I understood is that the Maven plugin lets us run the unit tests and integration tests and then generate a report.
What I'm looking for is a report based on the load I gave to my application deployed in WebSphere. I can see the generated jacoco.exec file, but it is not clear from the documentation how to generate the HTML reports.
Thanks in advance.
You could use the jacoco:report-aggregate goal with Maven.
See http://www.eclemma.org/jacoco/trunk/doc/report-aggregate-mojo.html for details.
P.S.: When I had the same issue, I used Sonar to read the generated exec file. It gives much more than just code coverage.
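For illustration only (not part of the original answer, and the version number is an assumption), report-aggregate is usually configured in a dedicated reporting module that depends on the modules to be covered:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version> <!-- version is an assumption -->
  <executions>
    <execution>
      <id>report-aggregate</id>
      <phase>verify</phase>
      <goals>
        <goal>report-aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The goal reads the execution data produced by the dependency modules and writes an aggregated report under target/site/jacoco-aggregate.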
I was able to generate the JaCoCo report as follows:
Configure JaCoCo as a Java agent.
Restart the server and do some transactions/give it some load (in my case I ran a series of automated tests).
Stop the server (this is what actually writes the jacoco.exec file).
Create/configure an Ant script and run it (this reads the jacoco.exec file and generates the HTML report; a sketch is shown below). Refer to http://www.eclemma.org/jacoco/trunk/doc/ant.html.
Even though my project is a Maven project, I used an Ant script for the report generation. I automated all of the above steps with Bamboo, which made running and maintaining this job easier.
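As a rough sketch of that Ant script (directory names, the application name, and the jacocoant.jar location are placeholders, not from the original answer):
<project name="jacoco-report" xmlns:jacoco="antlib:org.jacoco.ant">
  <!-- Make the JaCoCo Ant tasks available -->
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml" classpath="lib/jacocoant.jar"/>
  <target name="report">
    <jacoco:report>
      <executiondata>
        <file file="/tmp/jacoco.exec"/>
      </executiondata>
      <structure name="My Application">
        <classfiles>
          <fileset dir="target/classes"/>
        </classfiles>
        <sourcefiles encoding="UTF-8">
          <fileset dir="src/main/java"/>
        </sourcefiles>
      </structure>
      <html destdir="coverage-report"/>
    </jacoco:report>
  </target>
</project>
The important points are that the class files must be exactly the ones that were running in the server (see the hash discussion above), and that the source files are optional but needed for line-level highlighting.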

How to use JaCoCo with an application that requires installation and configuration to run

I am attempting to set up Java code coverage for a fairly complex app that:
combines multiple large modules, only one of which I need to check coverage on
uses a combination of Ant and Maven for builds
cannot be run except as an installed, configured application on a server
has automated tests, to be analyzed for coverage, that are not part of the application build and that exercise the application via API calls to the application server from a remote client
The examples in the JaCoCo documentation and in the online sources I have found assume the app under test is not previously installed and that the tests are unit/integration tests run as part of the build. The documentation does not go into detail about how the JaCoCo instrumentation is done or when a call to a particular line of code is recorded. If I use Ant or Maven to instrument a particular module, use that module to build the full app, install it on a server, and configure it, will my remote tests then generate the .exec file?
Any advice on how to achieve the end goal (knowing how much of our code is covered by the tests) is greatly appreciated, including better search terms than "jacoco for installed app", which, as you can imagine, is ... not very useful. My google-fu is humbled.
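For what it's worth, the offline instrumentation step the question asks about would, with the JaCoCo Ant tasks, look roughly like this (a sketch; the directory names and jacocoant.jar location are assumptions):
<project name="jacoco-instrument" xmlns:jacoco="antlib:org.jacoco.ant">
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml" classpath="lib/jacocoant.jar"/>
  <target name="instrument">
    <!-- Writes instrumented copies of the module's classes to a separate directory -->
    <jacoco:instrument destdir="build/classes-instrumented">
      <fileset dir="build/classes" includes="**/*.class"/>
    </jacoco:instrument>
  </target>
</project>
Offline-instrumented classes need jacocoagent.jar on the server's runtime classpath and a jacoco-agent.properties file to say where the .exec data should go; the simpler alternative is to leave the build untouched and start the server JVM with the -javaagent option, as in the Tomcat and WebSphere questions above.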

Unit Test Coverage not displaying on Sonarqube - Running through Jenkins Sonar plugin - Test Success displays correctly

I am currently using Jenkins to build a number of different modules for my project. I trigger the builds using Maven. I have SonarQube installed on the server and have set it up so that when a module builds it is displayed in SonarQube, including all of the basic details such as lines of code, technical debt, etc. The modules all have JUnit tests that run against them, and SonarQube reflects this by showing a Unit Test Success of 100% along with the number of tests run in each module. However, I cannot get the unit test coverage field to display anything; it is blank for all of the modules.
Here is an excerpt (one module) from my pom.xml:
customer.sonar.projectBaseDir=.
customer.sonar.sources=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/src/main/java
customer.sonar.Hybrid=Customer
customer.sonar.tests=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/surefire-reports
customer.sonar.junit.reportsPath=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/surefire-reports
The versions of the software I am using are as follows:
SonarQube v5.0,
Jenkins SonarQube plugin v2.1,
Maven v3.2.5
As I said at the beginning, the unit test success rate does display correctly, so I believe only a small change is needed to get the unit test coverage field working.
Any help would be really appreciated!
You need to execute the coverage engine of your choice and provide the report to SonarQube via the appropriate property.
If you are using JaCoCo, the report importer is embedded in the Java plugin; for other coverage engines (Clover, Cobertura, ...) you have to install the dedicated plugin.
For more information, see the dedicated documentation page.
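As a sketch of what that means here (property names as they existed around SonarQube 5.x; the path is an assumption based on the layout in the question): run the build with the JaCoCo agent attached, for example mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent test, and then point the analysis at the resulting .exec file:
customer.sonar.java.coveragePlugin=jacoco
customer.sonar.jacoco.reportPath=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/jacoco.exec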

SonarQube in IntelliJ not picking up unit test branch coverage when running analysis

I've installed the SonarQube plugin in IntelliJ and associated my project with our Sonar server. The server tells me the branch coverage each class has and updates when I submit unit tests. However, when I run a local analysis (right click on project -> Analyze -> Run Inspection by Name -> SonarQube), SonarQube tells me X more branches need to be covered by unit tests to reach the minimum threshold of 65.0% branch coverage for all of my classes, and this doesn't change locally even when I add more unit tests (although it does change on the server).
Any idea why this might be happening?
This is a known current limitation (one that actually applies to both the IntelliJ and Eclipse plugins): local analyses can't automatically execute unit tests, so they can't get the coverage results and give you the correct information.
The reason for this is that local analyses are just "standard" analyses that don't push data to the server. And by definition, SonarQube analyses don't execute any external tool, they just reuse previously generated reports if they need to.
If it's a Maven-based project, you have a solution: just run mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true before launching a local analysis. This should generate the coverage report at the default location and should therefore work.
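A sketch of the equivalent pom configuration (the version number is an assumption), which binds prepare-agent so a plain mvn clean install writes target/jacoco.exec without spelling the plugin coordinates out on the command line:
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version> <!-- version is an assumption -->
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>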

Is there an alternative to using a .testsettings file with test cases and Microsoft Test Manager?

We have a peculiar situation here that is causing our automated tests to fail on a newly created lab environment using TFS 2012.
We've always had a bunch of 'unit' tests that test our DAL code, which in turn uses the Enterprise Library Data Application Block to perform operations on the database. This was set up quite a few years ago to let our clients choose either SQL Server or Oracle databases alongside our product, taking advantage of the DatabaseFactory class and all the supporting generic interfaces and classes in entlib.data. I say 'unit' because these are actually not pure unit tests but integration tests, seeing as they require a real database to work.
To test the same SQL code against both databases, we maintain two separate .config files inside a 'Resources' folder in our TFS project branch, pointing to our test databases:
Resources\SqlServer\ConnectionStrings.config (SqlServer specific connection strings)
Resources\Oracle\ConnectionStrings.config (Oracle specific connection strings)
In the root Resources folder, there are two accompanying .testsettings files, responsible for deploying files specific to each database:
Resources\SqlServer.testsettings (which deploys the SqlServer\ConnectionStrings.config file)
Resources\Oracle.testsettings (which deploys the Oracle\ConnectionStrings.config file)
Since the whole structure is in source control, the .testsettings files can find the .config files using relative paths, allowing us to test everything without having to set up parameters manually.
On developers' machines we always select the SqlServer.testsettings file when running the tests, so that they don't need the whole Oracle environment installed to validate their changes before checking in code. The Oracle side of the validation has always happened in our build process, where we actually test every method twice: first using the same SqlServer.testsettings used by the developers, and then using Oracle.testsettings.
This way, we can set up our test assemblies' app.config files to redirect the connectionStrings node to an external file, like this:
<configuration>
<connectionStrings configSource="ConnectionStrings.config"/>
...
When the tests are run, MSTest copies the appropriate ConnectionStrings.config file to the test's working folder, based on which .testsettings file was used to initiate the run.
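For context, the deployment part of such a .testsettings file looks roughly like this (a sketch only; the name and path are placeholders for the layout described above):
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="SqlServer" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Deployment>
    <!-- Deploys the database-specific connection strings next to the test binaries -->
    <DeploymentItem filename="Resources\SqlServer\ConnectionStrings.config" />
  </Deployment>
</TestSettings>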
This was working fine until today, when I discovered that tests started through Microsoft Test Manager ignore the Visual Studio .testsettings files. Now I'm trying to run these same tests in our lab environment, but the ConnectionStrings.config files are not deployed (understandably) and the tests fail.
How can we achieve this without using .testsettings files? After huge headaches trying to set up Oracle correctly on our new x64 build server, we disabled the Oracle tests in the build definition. Now that we have started setting up our lab environment, we thought about configuring one of its machines with our whole system using Oracle, enabling us to once again run these 'unit tests' with Oracle-specific connection strings to validate our queries. At the same time, we want to keep testing everything locally and on the build server using SQL Server as well.
I think using [DeploymentItem] in this case is impossible, since it is meant for static files rather than selectable, dynamic ones like in our current setup.
Is there any equivalent to the .testsettings deployment process that we could use with test cases inside MTM/Lab Environment? On the Properties tab of our test plan, I can see the Automated Runs -> Test Settings option, but that only seems to allow deployment by specifying absolute paths (which will actually be resolved on the target machines). Is there a way to specify a relative path there, pointing to our ConnectionStrings.config files checked in to TFS? Maybe yet another alternative exists that I'm missing, perhaps using multiple build configurations?
Create separate build configurations for each of the server types by going into Configuration Manager and clicking New under Active solution configurations. Then edit the project file and add something like this:
<PropertyGroup Condition="'$(Configuration)' == 'Oracle'">
  <AppConfig>App.Oracle.Config</AppConfig>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'SQL'">
  <AppConfig>App.SQL.Config</AppConfig>
</PropertyGroup>
Then ensure you have the correct connection strings in each of the config files. You can then configure TFS to build using those build configurations.
More info: using PropertyGroup and Condition, MSBuild configurations, and MSBuild project properties.
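To illustrate how the configuration is selected at build time (the solution name is a placeholder), TFS or a local build would invoke MSBuild with the matching configuration, which is what makes the conditional AppConfig value take effect:
msbuild MySolution.sln /p:Configuration=Oracle
msbuild MySolution.sln /p:Configuration=SQL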