I'm currently running a build job on Jenkins that generates a bunch of CUnit test executables. What I'd like to do is take those binaries and run them automatically on a bunch of other machines upon successful completion of the build.
For example: Run the build -> success -> trigger copy of EXEs to other machines -> run said EXEs -> gather output.
My question is whether this is possible to automate with Jenkins. I'm not entirely sure what direction I should be going in. My best guess is to configure a bunch of other jobs that trigger on successful completion of the build job. These jobs would retrieve the files in question from somewhere, run them, and report back.
Any input would be greatly appreciated.
In the post-build actions of your build job, mark the generated executables as artifacts. Then you can use the Copy Artifact plugin to distribute the test executables to another test job (or more than one) that runs on a Jenkins build slave on the test machine(s). As you've mentioned, you can configure a successful build to trigger the test jobs. Based on other answers, it looks like CUnit generates an XML report of the test output that Jenkins can parse, so in the test job's post-build actions, configure the location of the test results.
From a management perspective, it is easier if there is one test job because you don't have to figure out how to partition the executables and you can read the results in one report. But depending on your use case, it might make more sense to have separate test jobs if the tests require different environments or if it makes sense to partition the test results.
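For concreteness, here's a rough sketch of that two-job arrangement in Pipeline syntax (the setup described above uses ordinary freestyle jobs configured through the UI; the job names, paths, and report pattern below are hypothetical):

    // Build job: compile and archive the CUnit executables.
    pipeline {
        agent { label 'build' }
        stages {
            stage('Build') {
                steps {
                    sh 'make cunit_tests'                    // produces bin/*_test
                    archiveArtifacts artifacts: 'bin/*_test'
                }
            }
        }
    }

    // Test job: runs on a slave on the test machine, triggered by the build job.
    pipeline {
        agent { label 'test-machine' }
        stages {
            stage('Fetch and run') {
                steps {
                    copyArtifacts projectName: 'cunit-build', selector: lastSuccessful()
                    sh 'for t in bin/*_test; do chmod +x "$t" && "$t"; done'
                }
            }
        }
        post {
            always {
                // The xUnit plugin understands CUnit's XML reports; the exact
                // step syntax varies with the plugin version.
                xunit tools: [CUnit(pattern: '**/*-Results.xml')]
            }
        }
    }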
I have some C++ source and header files on GitHub that I am currently building through two Jenkins jobs.
The first Jenkins job builds the tests and the second runs them. I could have combined both into one job, but I can't publish Google Test results with the JUnit publisher due to this issue: Append multiple xml test reports of google test. Separating the two jobs makes things clearer.
My solution was making the test job share the same workspace as the build job and triggering it once the build is stable.
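In Pipeline terms, the shape of what I have is roughly this (a sketch, not my actual configuration; job names and paths are made up):

    // Job 1 (build-tests): compiles the Google Test binaries into a fixed workspace.
    pipeline {
        agent {
            node {
                label 'builder'
                customWorkspace '/var/jenkins/shared-ws'
            }
        }
        stages {
            stage('Build') {
                steps { sh 'make unit_tests' }
            }
        }
    }

    // Job 2 (run-tests): shares the same workspace and fires once job 1 is stable.
    pipeline {
        agent {
            node {
                label 'builder'
                customWorkspace '/var/jenkins/shared-ws'
            }
        }
        triggers {
            upstream(upstreamProjects: 'build-tests', threshold: hudson.model.Result.SUCCESS)
        }
        stages {
            stage('Run') {
                steps { sh './unit_tests --gtest_output=xml:reports/unit_tests.xml' }
            }
        }
    }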
I am new to Jenkins and don't know if this is a typical workflow. Is there a better way to do this, or is this reasonable?
So I have a scenario whereby I have many different test suites. They are all triggered by a Create Test Environment step. However, these test suites cannot run concurrently on the same environment, as they would interfere with each other. To alleviate this, I added a shared resource in TeamCity and configured the build definitions to block on this resource, so that only one test suite runs at a time. This works.
However, if another code change is checked in while the test suites for Environment A are running, Environment B can be created by the Create Test Environment step, and all the test suites are re-queued. Because they all block on the same shared resource, those tests then sit in the queue awaiting access to it. However, there is no reason the tests for Environment B cannot run (one build at a time) in parallel with the tests for Environment A. How can I best tweak my TeamCity configuration to achieve this?
It seems that you are looking for matrix builds. This feature is not implemented in TeamCity. As a workaround, you can create a separate set of build configurations for each environment, each with its own shared resource, so that one environment's queue doesn't block the other's. You can use TeamCity templates to simplify the setup. For more details, see the related comment in the TeamCity issue tracker.
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant to simulate this: on save, Ant runs the tests for anything that was modified.
I've tried to replicate this, but alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched, but have only found information that needs to be decrypted first.
Eclipse has the builder feature: you create an Ant builder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. IntelliJ can build your project for you, but then the build depends on IntelliJ, which means you can't build and distribute your application without it.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, then let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you want done. For example, you might have a target called package that builds your jar or war. That target might depend upon another target called compile that compiles the code, and that target might depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests; you specify whether you want to run all of your tests via the <batchtest> sub-element or, if you have a test-suite driver program, specify it via the <test> element.
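To make those tasks concrete, here is a compile-and-test sketch using Groovy's AntBuilder, which ships with the Groovy distribution and drives the same Ant tasks with the same attribute names (the paths are invented; a real build.xml would wrap each step in a <target> element with a depends attribute):

    // Sketch only: AntBuilder method names map one-to-one onto Ant tasks.
    // The junit task also needs the ant-junit and JUnit jars on Ant's classpath.
    def ant = new AntBuilder()

    ant.mkdir(dir: 'build/classes')                      // the <mkdir> task
    ant.mkdir(dir: 'reports')
    ant.javac(srcdir: 'src', destdir: 'build/classes',   // the <javac> task
              includeantruntime: false)

    ant.junit(printsummary: 'yes', haltonfailure: 'no') {   // the <junit> task
        classpath { pathelement(location: 'build/classes') }
        formatter(type: 'xml')                              // writes TEST-*.xml reports
        batchtest(todir: 'reports') {                       // run every matching test class
            fileset(dir: 'build/classes', includes: '**/*Test.class')
        }
    }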
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if you have your Ant script able to run unit tests, the Continuous Integration engine not only can build your app, but then run the unit tests with each and every change that occurs.
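As a sketch of that wiring in Jenkins Pipeline syntax (the Ant target name and report path are whatever your build script defines; here they are assumed):

    // Minimal CI sketch: watch the repository, run the Ant build/test
    // target, and publish the XML reports the <junit> task produced.
    pipeline {
        agent any
        triggers { pollSCM('H/5 * * * *') }   // check for changes every few minutes
        stages {
            stage('Build and test') {
                steps { sh 'ant test' }       // assumed target: compiles, then runs <junit>
            }
        }
        post {
            always { junit 'reports/TEST-*.xml' }
        }
    }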
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even set up a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
I'm trying to figure out how to model my build process in Hudson. At present, most of our Hudson builds are somewhat hard-coded, in that the build process is a series of steps and we have one process per branch.
I have another build system that has many active branches, and each build has a series of integration tests which require a suite of machines to execute. As I migrate from the home-grown system to Hudson, I'm not quite sure of the right way to model this while keeping sustainability costs and build times to a minimum.
Here's my basic build:
create workspace
compile, link, package
transfer artifacts to test systems
invoke test harness on multiple systems to handle installation and acceptance tests
collect results
publish results
I'd like the integration part to be a group of generic machines (perhaps an elastic group) which can handle integration tests for any branch. I want to run as much in parallel as possible to keep my build times low. It looks like the best way to execute in parallel on Hudson is to break up the steps into jobs and use the Parameterized Trigger plugin to customize the generic jobs.
So, I'd have two main jobs: build and test.
I could have one build job per branch and a generic test job. The build job would use the Parameterized Trigger plugin to call the test job and provide the location of the build artifacts. The test job would call a series of jobs in parallel, passing down the branch and artifact parameters:
test
test-client-install (params: artifact location, branch)
test-server-install (params: artifact location, branch)
test-run (params: client machine, server machine)
join - collect results (params: client machine, server machine)
Each of the test-* jobs would pull a slave out of the group of slaves and execute. I'm not quite sure how to inform the slaves running the client and server jobs how to find each other nor am I sure how to reserve them from the pool and release them back into it.
I guess I could write a properties file to a common share and have the sub-jobs use that for inter-job communication.
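Something like this, say (plain Groovy; the share path and key names are invented):

    // Sketch: the client job records where it is running, and the server
    // job (or the collect step) reads it back from the common share.
    def handshake = new File('/mnt/build-share/build-1234/handshake.properties')
    handshake.text = "client.host=${InetAddress.localHost.hostName}\n"

    // ...later, in a different sub-job:
    def props = new Properties()
    handshake.withInputStream { props.load(it) }
    println "client is on ${props['client.host']}"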
Has anyone created this kind of complex setup in Hudson, or is this usually done in another system with which Hudson interacts (e.g., Hudson + STAF, with STAF managing resources)?
A few thoughts: the fewer jobs you have, the easier it is to maintain them; the more jobs you have, the more flexible you are and the more you can run in parallel. Since you emphasized fast build times, have a look at the Join plugin, which lets you run a few jobs in parallel and, once all of them are finished, go on with another job in the chain.
If your server is big enough, you can also experiment a little with the Clone Workspace plugin. It will reduce the need to copy files between jobs manually, and the need for the Parameterized Trigger plugin.
Reserving a slave is easy. You can group the slaves with labels; in your job you define what label your node has to have in order to execute your job. A node can have more than one label, and your job can be bound to more than one label. This way, Hudson decides where to put your job depending on availability. If your slaves have more than one executor, they can run jobs in parallel. I haven't used the Locks and Latches plugin for synchronizing across nodes, so I don't know if locks are per node or for the whole Hudson installation (latches are not supported yet). If you need to ensure that two jobs run on the same slave, try to combine them; otherwise you will lose Hudson's advantage of distributing your jobs freely over the available nodes.
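For what it's worth, the fan-out/join topology in the question is exactly what the Join plugin approximates; today's Jenkins Pipeline (which postdates this setup) can express the same shape directly. A sketch, with labels, scripts, and parameters invented:

    // Fan out the install jobs across the slave pool, then join for the
    // test run and result collection. All names here are placeholders.
    pipeline {
        agent none
        parameters {
            string(name: 'ARTIFACT_URL', defaultValue: '',
                   description: 'where the build job published its artifacts')
        }
        stages {
            stage('Install') {
                parallel {
                    stage('Client') {
                        agent { label 'test-pool' }
                        steps { sh './install_client.sh "$ARTIFACT_URL"' }
                    }
                    stage('Server') {
                        agent { label 'test-pool' }
                        steps { sh './install_server.sh "$ARTIFACT_URL"' }
                    }
                }
            }
            stage('Run tests') {          // implicit join: starts when both installs finish
                agent { label 'test-pool' }
                steps { sh './run_acceptance_tests.sh' }
            }
            stage('Collect results') {
                agent { label 'test-pool' }
                steps { sh './collect_results.sh' }
            }
        }
    }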
I'm trying to understand how to configure TFS Team Build to provide a CI solution for my project. I have a fairly common setup in which I have several categories of unit tests. For simplicity, let's say there are two categories:
Exchange2003
Exchange2007
Each test category needs particular software to be installed on the Build Agent so I would create two Build Agents, BuildAgentEx2003 and BuildAgentEx2007, with the obvious configurations.
Now when I kick off a CI build I want a few things to happen:
Exchange2003 tests to run on BuildAgentEx2003.
Exchange2007 tests to run on BuildAgentEx2007.
All test categories to get run and their results aggregated.
Is that supported, and if so, how would I configure it?
P.S. In reality, of course, the situation is much more complicated: I have a large matrix of test categories and build agents. Each Build Agent would typically be capable of running many different categories of unit tests, and each category of tests can be run by one or more Build Agents. The only requirement is that each category of tests be run once for each CI build.
Set up one CI build that builds the code base, plus one manually-triggered build for each configuration you need.
After the CI build succeeds, queue a new build for each configuration using TFSBuild.exe.
Pass the original build number to the queued builds as a parameter.
As the last step in each manual build, publish the test results back to the CI build using MSTest.exe.
Team Build 2010 should support this scenario out of the box, although it will take some work to set up build agents and assign tags to them. But once you do that, you should be able to use distributed builds to build and run tests on particular build agents.
It would be much more complicated with Team Build 2008.