I have a server running a proprietary language, on which I'm able to run "unit tests" written in that language. I cannot install a Hudson slave on this machine, but I would like these test results to appear in a Hudson job (to have at least some monitoring of the code quality of this server code).
I'm currently trying to use web services to fetch the results and store them in the Hudson workspace, but I fear this is not the right approach.
What solutions would you advise?
I finally went down the web services path, although it was not easy.
There were a few steps along the way:
I created a Maven mojo in Groovy (see GMaven for more information) which, using GroovyWS, calls a web service that builds a JUnit report from the test results.
Armed with this mojo, I created a Maven project that calls the web service and stores the junit.xml file in an output folder.
Finally, I created a Maven job in Hudson for this project and scheduled it to run regularly. Thanks to the JUnit reporting integration in Maven builds, my test results are visible as a graph in Hudson and users can drill down to failing tests.
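For reference, the junit.xml my mojo produces is just a standard JUnit-style report. The suite and test names below are made-up placeholders (the real ones come from the proprietary server tests), but the structure is what Hudson's JUnit parser expects:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal JUnit-style report; suite/class/test names are placeholders -->
<testsuite name="server.proprietary.tests" tests="3" failures="1" errors="0" skipped="0" time="4.2">
  <testcase classname="server.proprietary.OrderTests" name="testCreateOrder" time="1.1"/>
  <testcase classname="server.proprietary.OrderTests" name="testCancelOrder" time="0.9"/>
  <testcase classname="server.proprietary.OrderTests" name="testRejectInvalidOrder" time="2.2">
    <failure message="expected REJECTED but was ACCEPTED">server output / stack trace goes here</failure>
  </testcase>
</testsuite>
```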
Not sure if these are possible but...
One option might be, when the build job finishes, to execute a second build target or script that scps the test results from the remote server to the local build server so they appear in Hudson.
Or, if the platform allows it:
Map a directory on the remote machine to the local file system using something like sshfs.
karl
Yup, you can scp (or whatever) the results, in JUnit XML format, to the current workspace directory using a script task. Then add a "Publish JUnit test result report" post-build action and point it at the copied-in files.
Obviously, if the output is not in a JUnit-compatible format, you'll have to convert it first.
It sounds like you're on the right path, though.
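If you end up scripting the copy with Ant, the optional <scp> task is one way to do it. A minimal sketch, assuming Ant's ssh tasks (jsch.jar) are available; the host, paths, and key file below are placeholders:

```xml
<!-- Copy JUnit XML results from the remote server into the workspace.
     Requires Ant's optional ssh/scp tasks (jsch.jar); host, paths, and keyfile are placeholders. -->
<target name="fetch-results">
  <scp file="builduser@remote-server:/opt/tests/results/*.xml"
       todir="${basedir}/test-reports"
       keyfile="${user.home}/.ssh/id_rsa"
       passphrase=""
       trust="true"/>
</target>
```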
Related
How do I run a specific set of Appium Java TestNG test cases in AWS Device Farm? I can see that Device Farm ignores all TestNG annotations, including groups and enabled. Is there any way around this?
To run only a subset of tests, the project needs to include a testng.xml file in the root of the *-tests.jar. Here is a GitHub pull request I authored showing how to do that:
https://github.com/aws-samples/aws-device-farm-appium-tests-for-sample-app/pull/14
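The testng.xml itself can be as small as the sketch below (class and group names are placeholders); the important part is that it ends up in the root of the *-tests.jar so Device Farm picks it up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Placeholder suite: runs only tests tagged with the "smoke" group -->
<suite name="DeviceFarmSuite">
  <test name="SmokeTests">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
      <class name="com.example.tests.CheckoutTests"/>
    </classes>
  </test>
</suite>
```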
In the standard environment, the tests are parsed and executed individually. As a result, some TestNG features like priority and nested groups are not honored. Tests also run more slowly because the Appium server is restarted between tests.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-environments.html#test-environments-standard
If these features are needed, the project will need to use a custom test environment in Device Farm.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/custom-test-environments.html
This produces one set of logs and one video for all the tests, since the test package is not parsed.
Hth
-James
I configured JaCoCo in WebSphere as a Java agent (refer to http://www.jacoco.org/jacoco/trunk/doc/agent.html).
I restarted the server, ran a series of automated tests against the application (to generate some load), and then stopped the server.
I can see jacoco.exec being generated on the server (in the configured /tmp/ location).
Now, how do I generate the HTML report?
Before downvoting this question or marking it as a duplicate, here is why I'm posting it: I went through the JaCoCo documentation, such as http://www.jacoco.org/jacoco/trunk/doc/maven.html, as well as multiple Stack Overflow questions, but I'm still confused.
What I understood is that the Maven plugin lets us run the unit tests and integration tests and then generate a report.
What I'm looking for is a report based on the load I applied to my application deployed in WebSphere. I can see the generated jacoco.exec file, but it is not clear from the documentation how to generate the HTML report from it.
Thanks in advance.
You could use the jacoco:report-aggregate goal with Maven.
You can refer to http://www.eclemma.org/jacoco/trunk/doc/report-aggregate-mojo.html
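A minimal plugin configuration for that goal might look like the sketch below (the version and phase are only examples). Note that report-aggregate is aimed at multi-module builds; for a single externally produced jacoco.exec, the plain jacoco:report goal with its dataFile parameter pointed at the copied-in exec file is another option.

```xml
<!-- Sketch only: version and phase are examples, adjust to your build -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.8</version>
  <executions>
    <execution>
      <id>report-aggregate</id>
      <phase>verify</phase>
      <goals>
        <goal>report-aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```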
P.S.: However, when I had the same issue, I used Sonar to read the generated exec file. It gives you much more than just code coverage.
I was able to generate the JaCoCo report as follows:
Configure JaCoCo as a Java agent.
Restart the server and perform some transactions / apply some load (in my case I ran a series of automated tests).
Stop the server (this is what actually writes the jacoco.exec file).
Create/configure an Ant script and run it (this reads the jacoco.exec file and generates the HTML report). Refer to http://www.eclemma.org/jacoco/trunk/doc/ant.html; a minimal sketch is shown below.
Even though my project is a Maven project, I used an Ant script for the report generation. I automated all of the above steps with Bamboo, which made running and maintaining this job easier.
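For anyone wondering what that Ant script looks like, here is a minimal sketch; the jacocoant.jar location, class/source directories, and report name are placeholders for my setup:

```xml
<project name="jacoco-report" xmlns:jacoco="antlib:org.jacoco.ant" default="report">
  <!-- Make the JaCoCo Ant tasks available; the jacocoant.jar path is a placeholder -->
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
    <classpath path="lib/jacocoant.jar"/>
  </taskdef>

  <target name="report">
    <jacoco:report>
      <!-- The exec file written on shutdown by the WebSphere JVM running with the JaCoCo -javaagent -->
      <executiondata>
        <file file="/tmp/jacoco.exec"/>
      </executiondata>
      <!-- Point these at the application's compiled classes and source folders -->
      <structure name="My WebSphere Application">
        <classfiles>
          <fileset dir="build/classes"/>
        </classfiles>
        <sourcefiles encoding="UTF-8">
          <fileset dir="src/main/java"/>
        </sourcefiles>
      </structure>
      <html destdir="coverage-report"/>
    </jacoco:report>
  </target>
</project>
```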
I am attempting to set up Java code coverage for a fairly complex app that:
combines multiple large modules, only one of which I need to check coverage on
uses a combination of Ant and Maven for builds
cannot be run except as an installed, configured application on a server
has automated tests (the ones to be analyzed for coverage) that are not part of the application build and exercise the application server via API calls from a remote client
The examples given in the JaCoCo documentation and in the online sources I have found assume that the app under test is not already installed and that the tests are unit/integration tests run as part of the build. The documentation does not cover the details of how the JaCoCo instrumentation is done, or when a call to a particular line of code gets recorded. If I use Ant or Maven to instrument a particular module, use that module to build the full app, install the app on a server, and configure it, will my remote tests then generate the .exec file?
Any advice on how to achieve the end goal (knowing how much of our code is covered by the tests) is greatly appreciated, including better search terms than "jacoco for installed app" which as you can imagine is ... not very useful. My google-fu is humbled.
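For reference, the kind of Ant step I have in mind is JaCoCo's offline instrumentation task, roughly as sketched below (directories and the jacocoant.jar path are placeholders). My understanding is that the instrumented classes would also need jacocoagent.jar on the server's runtime classpath, with the output location configured via a jacoco-agent.properties file, but please correct me if that's wrong:

```xml
<!-- Rough sketch, placeholder paths: instrument one module's classes offline,
     then package the instrumented classes into the full application build. -->
<project name="instrument-module" xmlns:jacoco="antlib:org.jacoco.ant" default="instrument">
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
    <classpath path="lib/jacocoant.jar"/>
  </taskdef>

  <target name="instrument">
    <jacoco:instrument destdir="build/classes-instrumented">
      <fileset dir="build/classes" includes="**/*.class"/>
    </jacoco:instrument>
  </target>
</project>
```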
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for any tests that were modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction would be very much appreciated. I've searched, but have only found information that needs to be decrypted first.
Eclipse has a builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might also be helpful.
Ant is a build tool. Although IntelliJ can build your project for you, that build depends on IntelliJ, which means you can't build your application without it.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend on each other, then let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify. For example, you might have a target called package that builds your jar or war. That target might depend on another target called compile that compiles the code, which in turn might depend on a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to contain the <javac> task. It might also need the <mkdir> task to create the work directories where your class files are stored.
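To make that concrete, a minimal build file along those lines might look like this; the project name and directory layout are just placeholders:

```xml
<!-- Minimal sketch: a package target that depends on compile; names and paths are placeholders -->
<project name="myapp" default="package" basedir=".">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes" includeantruntime="false"/>
  </target>

  <target name="package" depends="compile">
    <mkdir dir="build/dist"/>
    <jar destfile="build/dist/myapp.jar" basedir="build/classes"/>
  </target>
</project>
```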
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task that runs the tests; you can run whole sets of tests via the <batchtest> nested element, or, if you have a test-suite driver, specify it via the <test> element.
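Continuing the sketch above, a test target built on <junit> and <batchtest> might look roughly like this; it assumes the test classes are compiled into build/classes and that the JUnit jar and Ant's junit support are available:

```xml
<!-- Sketch of a test target; paths and the JUnit jar version are placeholders.
     Assumes test classes have been compiled into build/classes. -->
<target name="test" depends="compile">
  <mkdir dir="build/test-reports"/>
  <junit printsummary="yes" haltonfailure="no">
    <classpath>
      <pathelement location="build/classes"/>
      <pathelement location="lib/junit-4.12.jar"/>
    </classpath>
    <formatter type="xml"/>
    <batchtest todir="build/test-reports">
      <fileset dir="build/classes" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>
```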
Once you have an Ant script that can build and run your tests outside of IntelliJ, you can bring in a continuous integration tool like Jenkins. A continuous integration tool watches your repository for changes and, when a change occurs, builds your application. It's a great way to catch errors early on.
What does this have to do with continuous testing? Well, if your Ant script can run your unit tests, the continuous integration engine can not only build your app but also run the unit tests on each and every change.
Jenkins is nice because it's very simple to use. You download jenkins.war and launch the Jenkins web page via the java -jar jenkins.war command, which brings up a web server on port 8080 on your machine. Jenkins can of course be configured to run on a different port, or under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, tell you which tests passed or failed, and notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even set up a traffic light in their office that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you run into specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
In VS2013 you can run the compiler for native code with the /analyze flag, which generates .xml files holding the analysis output. These are interpreted by the IDE and shown to the developer.
Is there a way to integrate this into a Jenkins build, or is there any tool that can read such .xml files (like vc.nativecodeanalysis.all.xml) and display them as a web page?
Jenkins is essentially a dashboard that outsources tasks to other tools to "do their thing".
With .NET builds, your only option is a freestyle job with heavy use of Windows batch command build steps or MSBuild steps. Jenkins only knows the path to MSBuild that you give it in "Manage Jenkins > Configure System".
After that, it hands the MSBuild build steps off to MSBuild with whatever parameters you pass.
Jenkins can consume JUnit test results, and many other tools are written to convert their results into JUnit format for Jenkins to consume. The mountain is brought to Jenkins rather than the other way around.
If MSBuild doesn't produce a graph, Jenkins unfortunately won't be able to either.
You might keep an eye on the Static Analysis plugin over time to see if it adds support for this.