I am using the JaCoCo Gradle plugin and currently have a hard-coded verification threshold. I want to set it up so that validation fails if code coverage falls below that of the master branch after a user adds changes. In other words, coverage should only ever go up from master and never go down. Is this possible?
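To make the question concrete, here is a minimal sketch of the kind of rule I mean, with a hypothetical masterCoverage.txt standing in for "some value recorded from the last master build" (the file name, the LINE counter, and the 0.80 figure are just placeholders, not my actual setup):

// Simplified sketch of a jacocoTestCoverageVerification rule in build.gradle.
// Reading master's ratio from masterCoverage.txt is a hypothetical placeholder.
def masterCoverage = file('masterCoverage.txt').exists()
        ? new BigDecimal(file('masterCoverage.txt').text.trim())
        : new BigDecimal('0.80')   // today's hard-coded threshold

jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                counter = 'LINE'
                value   = 'COVEREDRATIO'
                minimum = masterCoverage   // fail verification if coverage drops below this
            }
        }
    }
}
check.dependsOn jacocoTestCoverageVerification

Where that master value would come from (a build artifact, a property injected by CI, etc.) is exactly the part I am unsure about.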
Currently I am using these tools to run my tests, code coverage, and documentation:
Unit testing:
jasmine
xUnit
Code Coverage:
Istanbul
dotCover
Documentation:
Typedoc
As I'm trying to keep everything modular for both the frontend and the backend, we have multiple Bower components and NuGet packages, and of course each component runs different types of tests and documentation.
Now what I want to do is have a dedicated site that gathers all the test results and documentation, so that all developers can use it as a single point of reference.
Is there any plugin available that can help me achieve this?
If not, do you have any idea where I could start? I tried googling a bit, but with no luck.
I'm using roughly the same technologies.
As a build server I use TeamCity.
In a nutshell: your build is composed of steps, e.g. (simplified):
build .sln
gulp build
xUnit tests (*A: publishing coverage)
karma run
remap coverage from JavaScript to TypeScript (*B: publish coverage)
The only problem I have had so far is with the coverage (*A + *B): the data published last overwrites the earlier data (it is not merged or averaged). So in that case I use a custom reports page to display the Istanbul-generated HTML report and only use the xUnit coverage report as the build's coverage.
You could have the coverage.json from Istanbul as an artifact of your build, and a second build pick it up and report that coverage through TeamCity. It would simply be a coverage-report build (only one step: report code coverage), triggered by a successful build that generates the coverage.
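If you go that route, the coverage-report build can hand the numbers to TeamCity with service messages. A rough sketch, assuming you parse the line totals out of coverage.json yourself (the figures below are made up, and the statistic keys are TeamCity's line-coverage keys, so double-check them against your TeamCity version's documentation):

# Sketch: publish line-coverage statistics to TeamCity from a command-line step.
# The numbers would come from parsing the coverage.json artifact; these are placeholders.
echo "##teamcity[buildStatisticValue key='CodeCoverageAbsLCovered' value='842']"
echo "##teamcity[buildStatisticValue key='CodeCoverageAbsLTotal' value='1030']"
echo "##teamcity[buildStatisticValue key='CodeCoverageL' value='81.7']"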
For your generated documentation you can also use a custom reports page.
As for unit test execution (both Jasmine (via Karma?) and xUnit), each reports its own numbers and the final test report will show them combined.
As a preface, our setup is somewhat unusual for "legacy" reasons, and it is entirely possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working, or a suggestion for a different approach.
Environment
Java application with over 10K unit tests in JUnit. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but this will not happen soon).
The application is broken up into multiple modules, with each module having its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits code to the repo subtree containing a module's code and only that module's tests are executed, they get a result quickly.
Current Jenkins Setup
JUnit job
This is a single parameterized job that can run the tests for any module. The job takes as parameters the regexes selecting which tests to run, plus a parameter indicating which module it is running for, for notification purposes. It checks out the whole repo tree and then runs the tests based on the parameters.
After the run completes, this job analyzes the JUnit results, publishes the report, and sends out email notifications.
Repo watchers
One repo watcher exists for each module. The watcher checks out only the repo subtree that it monitors. When a change is detected, it triggers the JUnit job, telling it which tests to run and which component the run is for.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice and expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of unit tests, job-to-job comparison of test results does not provide any value. Without manually scanning across jobs it is not possible to tell what changed in terms of new failures or newly fixed tests.
Something very similar happens to change history:
Each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time to each other. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day, I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same job in Jenkins, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I've got this right, in basic language:
You want to track which failures are caused by which changeset?
In that case I would suggest the following (again in simple terms; you will need to adapt this to your current setup):
Set up a job that manages results.
This job should be parameterized to take the name or change number, and it should publish ALL the results.
This job should be triggered after each test run completes, to consolidate all results.
Now, if a new test or failure is introduced, the same job can track it and email the person who caused the failure (a rough sketch of such a consolidating job follows below).
Jenkins is very powerful and very generic, so pretty much every scenario can be handled, if not with plugins then with Groovy. So I would suggest taking a pen, mapping the workflow out on a board, and designing it as a process rather than as a single job.
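If the Pipeline plugin is an option in your Jenkins (an assumption on my part; your jobs may well all be freestyle), the consolidating job could look roughly like this sketch. The job name, report paths, parameters, and mailing list are placeholders, and the Copy Artifact and Email Extension plugins are assumed to be installed:

// Hypothetical consolidator job: copies the JUnit XML archived by the test job,
// publishes it in one place, and emails on failure.
pipeline {
    agent any
    parameters {
        string(name: 'MODULE', defaultValue: '', description: 'Module whose tests just ran')
        string(name: 'CHANGE_ID', defaultValue: '', description: 'Change that triggered the run')
        string(name: 'TEST_BUILD', defaultValue: '', description: 'Build number of the test run to consolidate')
    }
    stages {
        stage('Collect results') {
            steps {
                // requires the Copy Artifact plugin; 'junit-job' must archive its surefire XML
                copyArtifacts projectName: 'junit-job',
                              selector: specific(params.TEST_BUILD),
                              filter: '**/surefire-reports/*.xml'
                // if the JUnit publisher rejects the copied files as stale, touch them first
                junit '**/surefire-reports/*.xml'
            }
        }
    }
    post {
        failure {
            // requires the Email Extension plugin
            emailext to: 'dev-team@example.com',
                     subject: "Tests broken in ${params.MODULE} by change ${params.CHANGE_ID}",
                     body: 'See the consolidated test report for details.'
        }
    }
}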
I am currently using Jenkins to build a list of different modules for my project. I trigger the builds using Maven. I have SonarQube installed on the server and have set it up correctly, so that when a module builds it is displayed in SonarQube with all of the basic details such as lines of code, technical debt, etc. The modules all have JUnit tests that run against them, and SonarQube reflects this by showing a Unit Test Success rate of 100% along with the number of tests that have been run in that module. However, I cannot get the unit test coverage field to display anything, and it is blank for all of the modules.
Here is an excerpt (one module) from my pom.xml:
customer.sonar.projectBaseDir=.
customer.sonar.sources=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/src/main/java
customer.sonar.Hybrid=Customer
customer.sonar.tests=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/surefire-reports
customer.sonar.junit.reportsPath=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/surefire-reports
The versions of the software I am using are as follows:
SonarQube v5.0,
Jenkins SonarQube plugin v2.1,
Maven v3.2.5
As I said at the beginning, the unit test success rate does show up correctly, so I believe only a small change is needed to get the unit test coverage field working.
Any help would be really appreciated!
You need to execute the coverage engine of your choice and provide the report to SonarQube via the appropriate property.
If you are using JaCoCo, the report importer is embedded in the Java plugin; for other coverage engines (Clover, Cobertura...) you have to install the dedicated plugin.
For more information, see the dedicated documentation page.
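If JaCoCo is what you pick, a minimal Maven sketch is to bind the agent so that target/jacoco.exec is produced during the test phase, and then point the analysis at it. The plugin version and paths below are assumptions to adjust to your layout; sonar.jacoco.reportPath is the property name used by SonarQube 5.x-era Java plugins (in your per-module property style it would become customer.sonar.jacoco.reportPath):

<!-- Sketch: attach the JaCoCo agent during tests so target/jacoco.exec is generated. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.9</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>

and in the analysis properties (path is illustrative, matching your existing layout):

customer.sonar.jacoco.reportPath=D:/TFS/WorkSpace/DEV_2_HYBRID/APP_FO/application/customer/target/jacoco.exec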
I've installed the SonarQube plugin in IntelliJ and associated my project with our Sonar server. The server tells me the branch coverage each class has, and it updates when I submit unit tests. However, when I run a local analysis (right-click on the project -> Analyze -> Run inspection by name -> SonarQube), SonarQube tells me X more branches need to be covered by unit tests to reach the minimum threshold of 65.0% branch coverage for all of my classes, and this does not change locally even when I add more unit tests (though it does change on the server).
Any idea why this might be happening?
This is a known current limitation (that actually applies to both IntelliJ and Eclipse plugins): local analyses can't automatically execute unit tests, so they can't get the coverage results and give you the correct information.
The reason for this is that local analyses are just "standard" analyses that don't push data to the server. And by definition, SonarQube analyses don't execute any external tool; they just reuse previously generated reports if they need them.
If it's a Maven-based project, you have a solution: just run mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true prior to launching a local analysis. This should generate the coverage report at the default location, and the local analysis should therefore work.
I have a team build (upgrade template, TFS 2010, MSBuild) compiling and testing a WCF service. We use psexec and the Exec task to remotely install the service (WiX installer) on the web server, prior to running an integration test suite against it. However, sometimes our nightly build fails with a compilation error; I can only see the first 1024 bytes of the response, and most of it is CSS styles. I've tried delaying the tests with sleeps, thinking it might be due to a long JIT, but all 600+ integration tests fail. In the build log it seems that the Exec task running psexec executes synchronously, as expected, and returns exit code 0. Could anyone come up with a reason why this occurs now and then?
This doesn't sound like anything specific to TFS, MSBuild, or psexec -- it sounds like there may be an intermittent installation, configuration, or coding problem with the service. The point of CI and integration tests is to get early feedback on your process, and apparently something is wrong. The trick is drilling down into the problem and ruling out where the issue(s) reside.
Psexec claims the WiX deployment went fine, but did it? Are all files present? Were previous versions of the installation properly removed, or did they get upgraded correctly?
All 600 tests fail, but the tests don't contain the proper stack trace -- can you reproduce the problem outside of the tests? E.g., when the tests fail, can you exercise the service manually or run one of the existing tests with a debugger attached to see the same stack trace? One strategy might be to identify one or two specific tests that accurately validate the deployment: run just these tests after the deployment, and if they fail, abort the build and leave the server in its failed state for deeper analysis. This may require customizing the build template, but it may be worth the effort.
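As a sketch of that idea at the MSBuild level (the target name, test assembly, and "Smoke" category are all made up; in a TFS 2010 workflow build you would hook this in wherever your deployment step lives, which may mean customizing the build template instead):

<!-- Hypothetical smoke-test target: run a tiny "Smoke" category of integration tests
     right after the remote install and fail fast if the deployment is broken. -->
<Target Name="SmokeTest" AfterTargets="DeployService">
  <Exec Command="mstest.exe /testcontainer:IntegrationTests.dll /category:Smoke"
        ContinueOnError="false" />
</Target>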
Can you add logging to the WCF service? Better logging to the tests?
Lastly, CI as mentioned previously is about early feedback. The general rule is, "If something is painful then you should do it more often." Focus on pain points, isolate them and iteratively improve them. When the pain diminishes, focus on other pain points. In your case, consider running your "nightly" build in a rolling fashion - you'll find your intermittent problem after a few runs rather than weeks.